(See CV for a complete list of talks and presentations.)
The classical spacetime manifold of general relativity disappears in quantum gravity, with different research programs suggesting a variety of alternatives in its place. As an illustration of how philosophers might contribute to an interdisciplinary project in quantum gravity, I will give an overview of recent philosophical debates regarding how classical spacetime "emerges." I will criticize some philosophers for granting too much weight to the intuition that a coherent physical theory must describe objects as located in space and time. I will further argue, based in part on historical episodes, that an account of emergence needs to recover the structural features of classical GR responsible for its empirical success. This is more demanding than it might at first appear, although the details of recovery will differ significantly among different approaches to quantum gravity.
Are physicists using machine learning (ML) techniques like a hero consulting an oracle? Although the oracle speaks the truth, that is not sufficient to guide action. The oracle’s cryptic statements cannot be interpreted properly until, tragically, the hero’s fate has already been decided. There seems to be a similar tension between the capacities of some ML techniques and the goals of fundamental physics. As with the oracle, in many cases ML techniques have achieved extremely high accuracy. Yet the workings of these models are often a "black box" – accuracy is accompanied, again, by inscrutability or opacity. An uninterpretable answer to a fundamental problem threatens to be as useless as the oracle’s pronouncements. I will pursue two related lines of argument in response to this pessimistic view. First, in many applications the opacity of an ML technique is not an obstacle to guiding further research. For example, a "black box" technique can discover new physical quantities that are relevant to explaining patterns in the data, perhaps spurring different types of physical models. There are several ways opaque ML results can enrich our picture of a physical system. Second, philosophers have recently proposed that we should think of scientific understanding as a form of mastery – an ability to grasp how a system will respond in a variety of situations. Understanding in this sense requires an ability to extend models to new domains, but it does not require full transparency. ML methods may provide understanding of novel domains, in this sense, without also providing a "solution" (in the form of a simple, easily interpreted model).
Inflationary cosmology has been widely accepted for decades. Yet there are persistent debates about inflation that raise central questions in philosophy of science. Skeptics have often expressed doubt regarding whether inflation is "testable" or "falsifiable," due to the flexibility of inflationary models. This is an instance of a general question: to what extent does phenomenological success support the claim that a theory gets the physics right? How does one answer the skeptical worry that the theory "fits the data" only because it is flexible? My aim in this talk is to reframe this debate, drawing on ideas from George Smith’s historical and philosophical assessment of celestial mechanics. Smith answers the skeptic by looking at the role a theory plays in guiding inquiry. Astronomers "closed the loop" by starting with an initial description of motions; using discrepancies with observations to identify sub-dominant physical details; incorporating these details into a more refined description; and then starting the process over again. Through this process astronomers discovered hundreds of new details about the solar system, based on assuming the theory of gravity, that could be checked independently. Considering this case helps to characterize one challenge facing theories of the early universe: our lack of clarity about the underlying physics driving inflation has blocked pursuit of a similar process of iterative refinement. I will close by considering several different responses to this challenge.
How can we assess the reliability of the extremely complex simulations that play a central role in diverse areas of scientific research? Philosophers have recently debated whether simulation science can respond effectively to a novel form of holism: it is challenging to isolate the contributions of the distinct modules or components making up simulations. Such simulations are "epistemically opaque": we cannot easily trace through the impact of changing a parameter, tweaking a part of the code, or altering some aspect of the physical model. Lack of insight into how different modules work together, which Lenhard and Winsberg call "fuzzy modularity," makes it difficult to determine overall reliability, even if each component is independently well understood. We argue that, while the fuzzy modularity of complex simulations does indeed undermine the use of verification and validation to ensure the trustworthiness of simulations, the spectrum of methodologies available to test the reliability of simulations is broader. Other procedures, which have not been recognized and studied by philosophers but have nonetheless been used by scientists, make it possible to assess reliability even when it is extremely difficult to pursue a divide-and-conquer strategy (that is, to break down a computer simulation into local components and evaluate each of them independently). Our main aim is to explicate in detail one such methodology, which we call "crucial simulations," drawing on examples of its use in astrophysics. We will analyze the features of this methodology that make it possible to respond effectively to the holistic challenges posed by fuzzy modularity.
Contemporary cosmology pursues several ambitious aims, including uncovering new aspects of fundamental physics based on their role in the very early universe. The success cosmologists have had in pursuing these aims is particularly striking in light of evidential challenges they face. To overcome these challenges, cosmologists have revisited basic questions about what constitutes an acceptable scientific theory, what explanatory demands a theory should meet, and how to understand theory confirmation in a domain to which we have such limited access. Contemporary cosmology reflects a set of distinctive and interesting answers to these questions – an implicit philosophy of science, so to speak – that has guided research. Articulating and assessing this set of views is the primary goal of a monograph I am co-writing with Jim Weatherall. In this talk, I will elucidate two aspects of the philosophical views guiding research in early universe cosmology, and then assess recent debates about the testability of inflation in light of them.
According to inflationary cosmology, the very early universe passed through a transient phase of exponential expansion, leading to several characteristic features of the post-inflationary state that are compatible with observations. This phase of exponential expansion is sourced semi-classically by the stress-energy tensor of the inflaton field, but inflation requires going beyond a semi-classical description. In particular, inflation generates Gaussian perturbations with a nearly scale-invariant spectrum via the back-reaction of quantum fluctuations of the inflaton field on the gravitational field. A mean-field description sensitive only to the expectation value of the stress-energy tensor will miss these fluctuations. Standard accounts instead describe the generation of primordial fluctuations in terms of linear perturbations of the inflaton and metric fields around a background spacetime. Yet there are several questions regarding the domain of applicability of these methods from the perspective of quantum gravity. Here I will focus on two distinct aspects of these debates: the robustness of the inflationary account of the generation of perturbations to assumptions regarding the initial state, and the self-consistency of the dynamical evolution.
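For orientation, the nearly scale-invariant spectrum mentioned above is usually summarized by the standard single-field slow-roll expressions (textbook results, included here only to fix notation; $\epsilon$ and $\eta$ are slow-roll parameters, $M_{\rm Pl}$ the reduced Planck mass, and quantities are evaluated at horizon crossing, $k = aH$):
\[
\mathcal{P}_{\mathcal{R}}(k) \simeq \frac{1}{8\pi^2\,\epsilon}\frac{H^2}{M_{\rm Pl}^2}\bigg|_{k = aH}, \qquad n_s - 1 \equiv \frac{d\ln \mathcal{P}_{\mathcal{R}}}{d\ln k} \simeq 2\eta - 6\epsilon .
\]
A mean-field treatment that keeps only $\langle T_{ab}\rangle$ has no analogue of $\mathcal{P}_{\mathcal{R}}(k)$, which is why the back-reaction of quantum fluctuations must be included.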
Several philosophers have advocated an eliminativist position regarding gravitational energy and the conservation principles applied to it. We cannot directly characterize the energy carried by the gravitational field with a local quantity analogous to what is used in other field theories: we cannot define a gravitational energy-momentum tensor that assigns local properties to spacetime points and can be integrated over volumes to characterize energy-momentum flows. Because of the equivalence principle, we can always choose a locally freely falling frame, and by so doing locally transform away the gravitational field. The eliminativists take these features to imply that there is no such thing as "gravitational energy" or integral conservation laws governing it, and that efforts to resurrect such a notion illustrate how misleading it can be to treat general relativity as analogous to other field theories. In this talk I will consider how quasi-local definitions of energy, and conservation laws based on them, support a response to the eliminativists, addressing in particular concerns about whether such proposals depend on "background structure" in a problematic sense. Quasi-local energy and conservation laws do depend on background structure — we need a way to designate some motions as "freely falling," so that energy-momentum transfers can be measured via departures from these trajectories. But I will argue that these background structures can justifiably be introduced within particular modeling contexts. The challenge regarding gravitational energy then has a different character: there are many conflicting proposals for how to define quasi-local energy, and it is not clear whether they deliver consistent verdicts.
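To make the background-structure point concrete, recall a standard fact (stated here only for orientation): the local conservation law $\nabla_a T^{ab} = 0$ yields an integral conservation law only when extra structure, such as a Killing field $\xi^b$, is available,
\[
\nabla_{(a}\xi_{b)} = 0 \;\Rightarrow\; \nabla_a\!\left(T^{ab}\xi_b\right) = \left(\nabla_a T^{ab}\right)\xi_b + T^{ab}\nabla_{(a}\xi_{b)} = 0 ,
\]
so that $J^a = T^{ab}\xi_b$ is a conserved current that can be integrated over hypersurfaces. A generic spacetime has no Killing fields, and the quasi-local proposals discussed in the talk can be read as supplying a surrogate for this missing structure within a given modeling context.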
Eliminative reasoning is an appealing way to establish a theory: observations rule out all the competitors, leaving one theory standing. This only works, however, if we have taken all the alternatives into account. There have been long-standing debates in philosophy regarding the upshot and limitations of eliminative arguments. In this talk, I will defend the virtues and clarify the limitations of eliminative reasoning, based on seeing how it has been used in gravitational physics. I will consider one case study of eliminative reasoning in detail, namely efforts to show that general relativity (GR) provides the best theory of gravity in different regimes. Physicists have constructed parametrized spaces meant to represent a wide range of possible theories, sharing some core set of common features that are similar to GR. I draw three main points from this case study. First, the construction of a broad space of parametrized alternatives partially counters the “problem of unconceived alternatives” (due to Duhem and Stanford). Second, this response is only partially successful because the eliminative arguments have to be considered in the context of a specific regime. Solar system tests of gravity, using the PPN framework, favour GR – or any competing theories that are equivalent to it within this regime. But, third, eliminative arguments in different regimes may be complementary, if theories that are equivalent in one regime can be distinguished in other regimes. These three points support a qualified defense of the value of eliminative reasoning.
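For orientation, the PPN framework mentioned above replaces the metric with a parametrized expansion around the Newtonian limit; in its simplest (Eddington–Robertson) form, with $U$ the Newtonian potential and units $G = c = 1$,
\[
g_{00} = -1 + 2U - 2\beta U^2 + \dots, \qquad g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij} + \dots .
\]
General relativity corresponds to $\beta = \gamma = 1$, so solar system tests constrain $(\beta, \gamma)$ and thereby eliminate, within this regime, any parametrized alternative that predicts different values. (These are standard textbook expressions, included here only to fix ideas.)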
This talk considers the historical development of observational cosmology up to the 1960s, pursuing two main themes. Early work in relativistic cosmology characterized the effect of spacetime geometry on the appearance of distant objects – e.g., the cosmological red-shift as a function of distance. Results of this form are unsatisfying because they hold only for an exact spacetime geometry, and it is clear that the actual universe departs from any of these exact models. McCrea and McVittie initiated a program of deriving observational relations that hold for a broad class of solutions, not only the highly symmetric FLRW models, culminating in the work of Kristian and Sachs. This first line of work makes it possible to describe cosmological observations in a spacetime geometry approximating that of the real universe. The second theme regards the scope of cosmological observations. Physical cosmology succeeded in establishing a standard model by shifting away from a reliance on galaxies as tracers of large-scale spacetime geometry. Lemaitre had considered the effect of cosmological evolution on a wide variety of physical processes, but his results were limited and speculative. In light of other developments in physics, by the 1960s it was possible to use these alternative routes – including primordial element abundances and the background radiation – as strong evidence in favor of the big bang model.
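As a schematic reminder (not drawn from the talk) of what observational relations of the first kind look like: in an exact FLRW model the leading-order redshift–distance relation is the Hubble law,
\[
cz \simeq H_0\, d \qquad (\text{small } d),
\]
and the McCrea–McVittie and Kristian–Sachs line of work generalizes relations of this kind, replacing the single constant $H_0$ with direction-dependent combinations of the kinematic quantities (expansion, shear) of the observer’s congruence, valid for a much broader class of spacetimes.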
An appealing just-so story tells us that the content of a scientific theory – what it says about the observable – can be deduced from its basic postulates, with the assistance of auxiliary assumptions. Theories are successful to the extent that these consequences match what we see. Although it is initially plausible, there are several reasons why this just-so story needs to be replaced. It fails by over-estimating the extent to which we can survey the content of our theories. We typically assess theories based on understanding their consequences for a few tractable cases. It also under-estimates the role of theory in guiding ongoing inquiry, leading to an impoverished conception of success. I will sketch an alternative approach, indebted primarily to Howard Stein and George Smith. On this view, understanding content begins with representing the observer as a “measuring apparatus” of sorts. Theories extend our reach by making it possible to reliably measure new fundamental quantities the theory introduces. Specifying the content requires a model of how we interact with a target system. The resulting picture of the nature of scientific theories, and the challenges to fully specifying their content, leads to a different perspective on theory choice. I will illustrate these general themes with two cases from the history of physics, the development of celestial mechanics and contemporary cosmology.
Precision tests of quantum electrodynamics (QED) and the standard model provide some of the most secure knowledge in the history of physics. These tests can also be used to constrain and search for new physics going beyond the standard model. We examine the evidential structure of the relationships between theoretical predictions from QED, precision measurements of the corresponding quantities, and the indirect determination of the fine structure constant. We argue that "pure QED" is no longer sufficient to predict the electron’s anomalous magnetic moment, and that standard model effects are needed. We further argue that the details relevant to low-energy QED are robust against future theory change.
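For orientation, the quantity at issue is the electron’s anomalous magnetic moment, whose standard model prediction has the schematic form (standard structure, shown here only to fix ideas; the $C_n$ are calculated coefficients):
\[
a_e \equiv \frac{g_e - 2}{2} = \frac{\alpha}{2\pi} + C_2\!\left(\frac{\alpha}{\pi}\right)^{2} + C_3\!\left(\frac{\alpha}{\pi}\right)^{3} + \dots + a_e^{\rm hadronic} + a_e^{\rm electroweak} .
\]
At current experimental precision the hadronic and electroweak terms cannot be neglected, which is the sense in which "pure QED" no longer suffices; conversely, comparing the measured $a_e$ with this expression yields an indirect determination of the fine structure constant $\alpha$.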
In the history of physics, available data has often been sufficient to justify a theory as correct, at least within some domain. Yet in many areas of current fundamental physics, our lack of access to the most revealing regimes implies that experiments and observations may not provide useful guidance. Here I will briefly defend a response to this situation that contrasts with Dawid’s recent defence of non-empirical confirmation. I concur with Dawid that limits to underdetermination are an essential part of justifying theories. But I have a different assessment of how physicists have successfully responded to underdetermination in the past, and what this implies for current physics. I will discuss early universe cosmology to illustrate these contrasting approaches.
Stein has characterized one of the central problems in accounting for our knowledge in physics as that of getting the laboratory, or observatory, inside the theory – that is, of understanding how the mathematical structures of fundamental physical theories have empirical content. He has argued that physicists respond to this problem by giving schematic representations of observers and experiments. In addition, Stein emphasizes the importance of regarding knowledge as an enterprise, with current theories providing guidance for future inquiry. I will explore some ramifications of this way of thinking about the structure of scientific theories for contemporary cosmology. One goal of observational cosmology is to measure the six basic parameters appearing in the standard model of cosmology. These parameters are well-defined if the universe is suitably approximated at some scale by a perturbed FLRW model. The enormous extrapolations involved in the standard model are often justified by the consistent determination of these parameters via a variety of methods. Here I will consider two recent debates regarding this approach to cosmology, inspired by Stein’s work. The first debate regards the impact of different ways of characterizing the propagation of light through a cosmological spacetime on the determination of cosmological parameters (such as H_0). The second regards how the highly symmetric FLRW models relate to describing the real universe, at small scales where it is very lumpy.
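For orientation, the perturbed FLRW description referred to above can be written (in conformal Newtonian gauge, scalar perturbations only; a standard form, not specific to the talk) as
\[
ds^2 = a^2(\tau)\left[ -(1 + 2\Phi)\, d\tau^2 + (1 - 2\Psi)\,\delta_{ij}\, dx^i dx^j \right],
\]
where, in the usual parametrization, the six parameters fix the background expansion history $a(\tau)$, the matter content, and the amplitude and tilt of the primordial perturbations (plus the reionization optical depth); $H_0$ is the present value of the expansion rate $\dot a / a$, with the overdot denoting the cosmic-time derivative.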
My main aim is to articulate Newton’s distinctive brand of empiricist metaphysics, exemplified in his discussion of the nature of time. On this Newtonian approach, questions about the nature of time become intricately linked to empirical enquiry. Newtonian time is a refinement of some aspects of everyday thinking about time, under the pressure of new problems brought to the fore by advances in the study of motion. A secondary aim is to respond to recent discussions of the relationship between the account of time required for Newton’s empirical project, and for his theological views. Recently Schliesser and Janiak have both argued that some aspects of Newtonian time are justified based on theological considerations. I will argue against this reading of Newton, but my main interest is in what the debate reveals regarding Newton’s approach and implicit assumptions.
Newton’s natural philosophy broke with the mechanical philosophy dominant earlier in the seventeenth century in several ways. This talk will focus on the status of the sensible qualities of objects in Newton’s work. This is a pressing problem because Newton’s innovations in natural philosophy undermine the dominant view of sensible qualities among his contemporaries in (at least) two ways. Mechanical philosophers, from Galileo onwards, held that the constituents of matter (whether atomistic or corpuscular) share the sensible qualities of the dry-goods-sized objects given in ordinary experience. A preferred set of these, primary qualities such as size, shape, and motion, then provides a sufficient basis for explanatory accounts of our experience. By contrast, Newton’s work directly undercuts the explanatory accounts of experience offered by the mechanical philosophers. First, the geometrical properties favored by the mechanical philosophers play almost no role within his system of natural philosophy. Second, and more importantly, the Principia treats the experience of bodies in terms of theoretical quantities that are not directly manifested in experience. For example, the mass of an object is a quantity defined within the theoretical framework provided by the laws of motion. One can infer the mass from a body’s motions granted an inertial reference frame and an identification of the relevant forces responsible for the body’s motion. I will argue that, in light of these two points, the logical character of the contribution sense experience makes to natural philosophy must be different from that assumed by the mechanical philosophers.
For several decades, cosmologists treated the early universe as the "poor man’s accelerator," providing a means to test high energy physics in the course of reconstructing its history. What are the challenges to establishing the physics of the early universe, based on cosmological observations limited to a finite region of the universe? In many areas of physics we have very strong evidence for accepting current theories, at least as approximations accurate within some restricted domain. The contrast with these cases of exemplary evidence will help to identify general challenges to establishing theories of the early universe with a comparable level of certainty. I will further argue that multiverse theories, such as eternal inflation, undermine one way of responding to these challenges.
I will defend the view that structure on the space of theoretical models is needed to understand how theories represent nature. Many other views, by contrast, locate empirical content within a single model. There are several ways in which the need for structure on the space of models can come to light, but I will focus on measurement. Physical theories provide us with an account of what systems can be used to reliably measure some fundamental quantity introduced by the theory, and over what domains they can be successfully applied. Assessing the reliability of measurements characterized in this way requires claims that extend beyond a single model, since these implicitly consider a range of counterfactual circumstances. Capturing this modal dimension of measurement requires an appeal to structures defined on the space of models. Philosophers have been far too willing to regard assessments of instrumental reliability as part of the messy details of scientific work that can be neglected in considering the structure of scientific theories. On my alternative view, putting these questions front and center leads to a strikingly different account of empirical content, with implications for underdetermination and continuity through theory change. I will sketch the view, consider several objections to it, and consider some of these implications.
This talk aims to provide an overview of recent work on gauge symmetry among philosophers of physics, by discussing the strengths and weaknesses of three different positions: the fundamentalist, who identifies gauge symmetry as a fundamental feature of our best quantum field theories, with a status comparable to spacetime symmetries; the eliminativist, who holds that gauge symmetry represents mere descriptive redundancy or excess structure, whose elimination clarifies interpretational issues; and the pragmatist, who regards gauge symmetry as a useful heuristic or a crucial aid to quantization. One overall theme of the discussion will be the extent to which the evaluation of these positions depends on open issues in mathematical physics.
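To fix the notion at issue, the simplest example is the U(1) gauge symmetry of electromagnetism (a textbook case, given here only for orientation):
\[
A_\mu \;\longrightarrow\; A_\mu + \partial_\mu \lambda, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \ \text{unchanged},
\]
so gauge-related potentials yield the same field strengths and the same observable predictions. The eliminativist reads this as descriptive redundancy to be quotiented away; the fundamentalist and the pragmatist read the same equations differently, emphasizing the constructive role gauge invariance plays in formulating and quantizing the interactions.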
There is widespread agreement within the physics community that the cosmological constant problem is a crisis in theoretical physics. My aim is to clarify the nature of the problem. The disastrous prediction at its core, a vacuum energy density enormously larger than observations allow, follows from treating the vacuum energy density of quantum fields as a source of the gravitational field. Two aspects of this inference are troubling. First, the connection between vacuum energy density and the successful predictions of quantum field theory is tenuous — the Casimir effect, Lamb shift, and so on fail to establish that the zero-point energies of quantum fields contribute to the vacuum energy density. Second, the vacuum energy density contributes to Einstein’s field equations of general relativity as an effective cosmological constant term. This way of coupling the vacuum energy density is not mandatory, and there are also questions about the nature of a vacuum state in a general curved spacetime. In effect, I will treat the cosmological constant problem as an instance of a general methodological problem that arises in combining theories: the need to identify surplus structure that can be eliminated in the new theory.
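For orientation, the inference under scrutiny can be stated in one line (standard textbook form, with metric signature $(-,+,+,+)$): a vacuum state with $\langle T_{ab}\rangle_{\rm vac} = -\rho_{\rm vac}\, g_{ab}$ shifts the cosmological constant term in Einstein’s equation,
\[
G_{ab} + \Lambda\, g_{ab} = 8\pi G \left( T^{\rm matter}_{ab} - \rho_{\rm vac}\, g_{ab} \right) \;\Longleftrightarrow\; G_{ab} + \left(\Lambda + 8\pi G\, \rho_{\rm vac}\right) g_{ab} = 8\pi G\, T^{\rm matter}_{ab} .
\]
Naive estimates of $\rho_{\rm vac}$ from zero-point energies exceed the value of $\left(\Lambda + 8\pi G\,\rho_{\rm vac}\right)/(8\pi G)$ compatible with observations by many orders of magnitude. The two troubling aspects identified above correspond to the two ingredients of this equation: whether zero-point energies really contribute to $\rho_{\rm vac}$, and whether vacuum energy must couple to gravity in this particular way.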