Bayesian Workshop 5
Invited Papers


The three invited papers to be presented at the workshop are:


``The Bayesian Analysis of the New York School Choice Scholarships Program: A Randomized Experiment with Noncompliance and Missing Data''

John Barnard (Statistics Department, Harvard University)
Constantine Frangakis (Statistics Department, Harvard University)
Jennifer L. Hill (Statistics Department, Harvard University)
David Myers (Mathematica Policy Research)
Paul Peterson (Government Department, Harvard and the Kennedy School of Government)
Donald B. Rubin (Statistics Department, Harvard University)
The supposed decline of the U.S. educational system, including its causes and solutions, has been a popular topic of debate in recent years. Part of the difficulty in resolving this debate is the lack of solid empirical evidence regarding the true impact of educational initiatives. For example, educational researchers are rarely able to engage in controlled, randomized experiments. The efficacy of so-called ``school choice'' programs has been a particularly contentious issue. A current multi-million dollar evaluation of the New York School Choice Scholarship Program (NYSCSP) endeavors to shed some light on this issue. This study compares favorably with other school choice evaluations in terms of the thought that went into the randomized experimental design (a completely new design, the Propensity Matched Pairs Design, is being implemented) and the rigor of its data collection and compliance-encouraging efforts. In fact, this study benefits from the authors' previous experience with the Milwaukee Parental Choice Program, which, although randomized, was relatively poorly implemented as an experiment. At first glance, it might appear that the evaluation of the NYSCSP could proceed without undue statistical complexity. Unfortunately, this program evaluation, as is common in studies with human subjects, suffers from some complications:

  1. Non-compliance. Approximately 25% of children who were awarded scholarships decided not to use them.
  2. Missing data. Some guardians failed to complete survey information fully. Some children were too young to take pre-tests. Some children failed to show up for post-tests. Levels of missing data range from approximately 3% to 50% across variables.

Work by Frangakis and Rubin (1997) has revealed the severe threats to valid estimates of experimental effects that can exist in the presence of non-compliance and missing data, even for estimation of simple intention-to-treat effects. In addition, care must be taken in proper treatment of the longitudinal outcome data that will be gathered for each child because much interest focuses on the relative effectiveness of different lengths of exposures.

The technology we use to analyze longitudinal data from a randomized experiment suffering from missing data and non-compliance involves the creation of multiple imputations, for missing outcomes and missing covariates as well as for missing true compliance statuses, using Bayesian models. Fitting Bayesian models to such data, which is challenging because of the implications of randomization and the desire to handle the complications in a plausible manner, requires MCMC methods for missing data. A Bayesian approach allows for analyses that rely on weaker assumptions than standard approaches. Our methodology gives estimates of the causal effects of participating in school choice programs that account for the complications of non-compliance and missing data, giving policy makers an important piece of evidence about the usefulness of school choice programs for improving children's education.
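
Because the abstract leans on multiple imputation of latent compliance, a minimal sketch may help fix ideas. The following Python fragment is our illustration, not the authors' code: it assumes one-sided noncompliance, a simple normal outcome model with known variance, and flat priors, and it augments the unobserved compliance status of unoffered children inside a Gibbs sampler to estimate the complier-average causal effect. The actual NYSCSP analysis combines this augmentation idea with imputation of missing outcomes and covariates.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- simulated toy data: Z = scholarship offer, D = used it, Y = test score
    n = 400
    Z = rng.integers(0, 2, n)                     # randomized offer
    C = rng.random(n) < 0.75                      # latent complier indicator
    D = (Z == 1) & C                              # one-sided noncompliance
    Y = rng.normal(np.where(D, 2.0, 0.0), 1.0)    # true complier effect = 2

    # --- Gibbs sampler with data augmentation for latent compliance
    pi, sigma = 0.5, 1.0                          # sigma treated as known
    mu = {"c1": 0.0, "c0": 0.0, "n": 0.0}         # complier (by arm), never-taker
    comp = np.ones(n, dtype=bool)
    draws = []
    for it in range(2000):
        # 1. impute compliance of unoffered (Z=0) units from posterior odds
        idx = Z == 0
        lc = pi * np.exp(-0.5 * ((Y[idx] - mu["c0"]) / sigma) ** 2)
        ln = (1 - pi) * np.exp(-0.5 * ((Y[idx] - mu["n"]) / sigma) ** 2)
        comp[idx] = rng.random(idx.sum()) < lc / (lc + ln)
        comp[Z == 1] = D[Z == 1]                  # observed when offered
        # 2. conjugate updates given completed compliance data (flat priors)
        pi = rng.beta(1 + comp.sum(), 1 + (~comp).sum())
        for key, mask in {"c1": comp & (Z == 1), "c0": comp & (Z == 0),
                          "n": ~comp}.items():
            ybar = Y[mask].mean() if mask.any() else 0.0
            mu[key] = rng.normal(ybar, sigma / np.sqrt(max(mask.sum(), 1)))
        draws.append(mu["c1"] - mu["c0"])         # complier-average causal effect

    print("posterior mean CACE:", round(float(np.mean(draws[500:])), 2))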

Discussants: Stephen Ansolabehere (Department of Political Science, MIT) and Brian Junker (Department of Statistics, CMU).

``Bayesian Designs for Dose-Ranging Drug Trials''

Donald A. Berry (Duke University)
Peter Mueller (Duke University)
Andy P. Grieve (Pfizer)
Michael Smith (Pfizer)
Tom Parke (Tessella)
Richard Blazek (Pfizer)
Neil Mitchard (Pfizer)
Chris Brearley (Pfizer)
Michael Krams (Pfizer)
I. BACKGROUND

Clinical drug trials are categorized into phases I, II, and III. Phase I addresses side effects and such issues as drug metabolism. Phase II addresses the relationships between dose and effectiveness and between dose and side effects. Phase III consists of large, randomized trials comparing one or two doses of the experimental drug with placebo (if possible) and perhaps also with therapies known to be active.

In the standard type of phase II efficacy trial, patients are assigned to a dose from among those being considered (usually 4 to 12 in number). Assignment is random, usually with equal numbers of patients assigned to each dose. Based on the results of the trial, a decision is made to enter phase III of the drug's development, to stop development, or to conduct another phase II trial.

Such a design is inefficient, in terms of both time and resources. If the drug is effective then the dose-response curve has a positive slope for some dose. The sloping part of the curve may be located at doses greater than or less than those considered in the trial. In either case many of the observations at the opposite end from where the greatest slope occurs may be wasted. If the sloping part of the curve is in the middle of the doses considered and if the slope is large (relative to the interval between doses) then observations at both ends of the range are wasted. The case in which the drug is ineffective is similar to that in which a positive slope occurs at doses larger than those considered in the trial, namely, most of the observations at lower doses are wasted. Moreover, even if the doses considered are judged to be appropriate in retrospect, the variability in the responses may be greater or less than originally anticipated. In the former case the sample size chosen was too small and in the latter case it was unnecessarily large. Therefore, a common retrospective view of trials with this type of design is that a different allocation to doses would have been more informative, or a different sample size would have been more appropriate.
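
A toy calculation (ours, not from the paper) makes the point concrete: under a two-parameter logistic dose-response model, the asymptotic standard error of the estimated slope from an equal-allocation design depends sharply on whether the dose grid straddles the sloping region.

    # Illustrative only: information about the slope of the logistic
    # dose-response curve p(d) = 1/(1 + exp(-(a + b*d))) under a fixed
    # equal-allocation design, for two placements of the dose grid.
    import numpy as np

    def slope_se(doses, n_per_dose, a=-6.0, b=1.0):
        """Asymptotic s.e. of the slope estimate from Fisher information."""
        info = np.zeros((2, 2))
        for d in doses:
            p = 1.0 / (1.0 + np.exp(-(a + b * d)))
            w = n_per_dose * p * (1.0 - p)        # binomial information weight
            info += w * np.array([[1.0, d], [d, d * d]])
        return np.sqrt(np.linalg.inv(info)[1, 1])

    # True ED50 is at dose 6: a grid straddling it is far more informative
    # than one sitting on the flat lower part of the curve.
    print(slope_se(doses=[4, 5, 6, 7, 8], n_per_dose=40))   # straddles ED50
    print(slope_se(doses=[0, 1, 2, 3, 4], n_per_dose=40))   # mostly wasted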

II. DESCRIPTION

We have developed an innovative class of designs that we are introducing into practice. In this case study we will describe the designs, address difficulties in implementing them in actual clinical trials, and relay our experience with using them--including experience with the FDA and other regulatory agencies.

For simplicity, the description below considers "dose" to be one-dimensional. In practice it can be multidimensional and, for example, it could include dosing frequency and duration of administration. Our designs can be viewed as having two stages; the first involves dose finding and the second is confirmatory.

A. Dose Finding Stage

The first stage allows for a wide range and a large number of doses, including placebo. The purpose of this stage is to assess dose-response in an informative and efficient way. Assignment to dose is sequential, in the following sense. As patients are treated, they are followed and their responses are communicated to a central database. Doses are assigned to subsequent patients so as to obtain maximal and rapid information about the dose-response curve. As time passes, the "current" posterior distributions of the various parameters are updated (on a daily basis). The dose of the next patient is assigned centrally. The endpoint of interest is response at a fixed number N of days following treatment. In our computational procedure we impute missing N-day responses using their predictive distributions given current responses and given assigned doses.
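
As one way to picture this imputation step (an illustrative assumption on our part, not the authors' model), suppose a patient's early and day-N responses are jointly normal; the missing final response is then drawn from its conditional distribution given the early one.

    # Minimal sketch: draw the day-N response from its predictive
    # distribution given an observed early response, under an assumed
    # bivariate normal model.  All parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    def impute_final(y_early, mu_e, mu_n, sd_e, sd_n, rho, n_draws=1000):
        """Predictive draws of the day-N response given the early response."""
        cond_mean = mu_n + rho * (sd_n / sd_e) * (y_early - mu_e)
        cond_sd = sd_n * np.sqrt(1.0 - rho ** 2)
        return rng.normal(cond_mean, cond_sd, n_draws)

    # e.g. a patient whose early response is one s.d. above its mean:
    draws = impute_final(y_early=1.0, mu_e=0.0, mu_n=0.0,
                         sd_e=1.0, sd_n=1.0, rho=0.7)
    print(draws.mean(), draws.std())   # centred near 0.7, spread sqrt(0.51)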

The next patient's dose depends on the currently available response information, formalized in the posterior distribution of the unknown parameters. But it also depends on the current "information bank." At any particular time, patients will have been assigned to various doses and will have responded only partially, in the sense that their N-day outcomes are not yet known. Such patients serve as a bank of information about the doses assigned. This information becomes known gradually: as patients' outcomes mature, they leave the information bank and are replaced by recently treated patients. Our assignment scheme takes into consideration both the existence of this information bank and the doses assigned to the patients in it.

The assignment algorithm used is complicated to describe. Roughly speaking, it starts with a wide range of doses and homes in on a narrower range as it learns about critical features of the dose-response curve. If this narrower range is the set of highest doses and the results are sufficiently poor then the algorithm recommends stopping the trial.
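
Purely to convey the flavor of such a scheme, here is a deliberately crude stand-in, not the authors' algorithm: conjugate normal posteriors are maintained for the mean response at each candidate dose, and each new patient receives the most uncertain dose in a window around the current estimate of the lowest dose reaching a target effect.

    # Hypothetical adaptive assignment rule (not the trial's algorithm).
    import numpy as np

    rng = np.random.default_rng(2)
    doses = np.arange(10)                  # candidate doses, placebo = 0
    true_mean = np.clip(doses - 3, 0, 4)   # unknown truth, used only to simulate
    post_mean, post_var = np.zeros(10), np.full(10, 25.0)
    obs_var, target_effect = 4.0, 3.0

    for patient in range(60):
        # current guess at the lowest dose reaching the target effect
        above = np.where(post_mean >= target_effect)[0]
        centre = int(above[0]) if above.size else len(doses) - 1
        window = slice(max(centre - 2, 0), min(centre + 3, len(doses)))
        d = doses[window][np.argmax(post_var[window])]   # most uncertain dose
        y = rng.normal(true_mean[d], np.sqrt(obs_var))   # observe response
        # conjugate normal update for that dose's mean
        w = post_var[d] / (post_var[d] + obs_var)
        post_mean[d] += w * (y - post_mean[d])
        post_var[d] *= 1.0 - w

    print(np.round(post_mean, 1))   # estimates concentrate on the rising region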

B. Confirmatory Stage

For a trial to qualify with regulatory agencies as a "pivotal" phase III trial, a large number of patients must be randomized to drug and placebo. As information about dose-response accrues from the dose-finding stage, if this information suggests that the drug is effective, the assignment procedure shifts to a confirmatory stage. Two doses will be identified based on the dose-response information, and patients will be randomized to these two doses and placebo in a balanced fashion. The shift will be seamless and not recognizable by physicians and others involved in the trial (except for members of the trial's data and safety monitoring committee).

The timing of the shift from dose-finding to pivotal is critical. Whether to shift will be based on a Bayesian decision analysis using forward simulation and dynamic programming. A decision to shift will depend on the available information about dose-response, the costs of entering additional patients, and the requirements of the FDA and other regulatory agencies concerning the information needed for eventual marketing approval of the drug. Although the design and the determination of the pivotal stage's sample size are Bayesian, the decision analysis recognizes the need to provide regulatory agencies with a frequentist analysis of the trial results.
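
A stripped-down sketch of the forward-simulation ingredient (the dynamic-programming layer, patient costs, and regulatory constraints are omitted, and all numbers are hypothetical): compute the posterior predictive probability that a fixed-size pivotal trial would reach frequentist significance, and consider shifting when that probability is high.

    import numpy as np

    rng = np.random.default_rng(3)

    def predictive_power(post_mean, post_sd, n_per_arm, sigma=1.0, sims=10000):
        """P(pivotal z-test significant, one-sided 0.025), averaging over
        the current posterior for the treatment effect."""
        delta = rng.normal(post_mean, post_sd, sims)   # plausible true effects
        se = sigma * np.sqrt(2.0 / n_per_arm)
        zhat = rng.normal(delta / se, 1.0)             # simulated trial z-stats
        return np.mean(zhat > 1.96)

    # e.g. current posterior: effect 0.25 +/- 0.10 (in outcome s.d. units)
    p = predictive_power(post_mean=0.25, post_sd=0.10, n_per_arm=150)
    print(p)    # shift to the pivotal stage if, say, p exceeds a threshold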

III. BENEFITS AND POTENTIAL IMPACT

Our efficient dose assignment scheme more accurately identifies both effective and ineffective drugs. Moreover, efficient dose assignment can significantly shorten a drug's clinical development. First, the number of patients in a sequential trial will usually be substantially smaller than under standard designs, which has important economic and ethical implications. Second, a seamless transition between the dose-finding and confirmatory stages eliminates the time required to set up a second trial.

The performance of a sequential design is enhanced by rapid transmittal of information. Our formal use of probability modeling in the dose-finding stage allows the use of early, partial information and does not require waiting a full N days to observe each final response.

We will consider a particular class of drugs. However, the problem we address and our solution apply to all classes of drugs, so the impact of our design is potentially far-reaching, and we expect our presentation to attract attendees from throughout the pharmaceutical industry.

Discussants: Sam Wieand (NSABP) and Kathryn Chaloner (School of Statistics, University of Minnesota).

``Modeling the Impact of Traffic-Related Air Pollution on Childhood Respiratory Illness''

Nicky Best (Statistician, Imperial College UK)
Samantha Cockings (Geographer, Imperial College UK)
Paul Elliott (Clinical Epidemiologist, Imperial College UK)
Katja Ickstadt (Statistician, University of North Carolina USA)
Robert Wolpert (Statistician, Duke University USA)
BACKGROUND

The possible harmful health effects of outdoor air pollution from industrial sources and road traffic are the focus of great public concern and scientific attention in the United Kingdom. Indeed, the goal of reducing the harmful health impact associated with outdoor air quality is a principal issue raised in the UK government's "National Environmental Health Action Plan", drawn up under the auspices of the World Health Organization. The fundamental scientific question affecting UK policy decisions is whether and how recent apparent increases in the rates of childhood respiratory illness are related to atmospheric pollution arising from motor vehicle emissions.

The evidence for such a link remains equivocal, due in large part to a host of methodological difficulties that have beset scientific investigations of the relationship. These include major problems of estimating exposure; confounding due to possibly unknown or unmeasured risk factors; and the potential for bias arising from approximation and aggregation in the data. These methodological obstacles must be overcome before an accurate quantitative assessment of the effects of traffic pollution on health can be made available to inform public health policy, transport planning and environmental legislation regulating vehicle emissions.

In this Case Study we present an epidemiological investigation of hospital admission rates for respiratory illnesses in children under one year old living in Greater London, and relate these rates to measures of exposure to traffic-related air pollution. We help resolve the fundamental scientific question of how childhood respiratory illness rates are related to motor vehicle emissions by studying six specific issues:

  • Do infants living in areas of high traffic density have higher rates of respiratory illness?
  • If so, which pollutants or vehicle types are associated with the greatest risk?
  • Can we quantify this risk reliably?
  • Do individual characteristics such as gender or ethnic origin increase or modify this risk?
  • Is the disease-exposure relationship confounded by known risk factors such as socioeconomic deprivation?
  • Can we account for residual confounding due to unknown or unmeasured risk factors?
To answer these questions we construct a hierarchical Bayesian semi-parametric spatial point-process regression model supporting inference about the relationship of children's respiratory illness rates to traffic pollution and other risk factors. We resolve the exposure estimation problem by employing sophisticated geographical information system (GIS) methods to build and validate exposure extrapolation models; we resolve the confounding problem by including in our model a latent spatially varying risk factor; and we reduce or avoid the problem of ecological bias by using each source of data at its natural level of spatial resolution, without any unnecessary approximation or aggregation.

Our presentation will include a detailed discussion of the epidemiological background and available data, including our use of GIS methods to develop suitable measures of exposure based on the raw data (traffic flows and vehicle emission factors). We will then review briefly existing methods for the epidemiological analysis of exposure-response relationships and discuss some of the associated methodological problems. This will motivate the modeling approach with which we address the six questions.

OUR DATA

Several datasets were required for this study. These include counts of hospital admissions for respiratory illness in children aged under one year living in Greater London (available from the UK Department of Health), aggregated by postcode of residence (a single UK postcode covers a median of twelve households). Individual attributes such as gender and ethnic origin were collected for each case; the Carstairs index for electoral wards (derived from the 1991 UK Census) was used to provide an area-level covariate reflecting socioeconomic deprivation. Population density data for children aged under one year (needed for studying population-based disease rates) were based on the number of births in the preceding year (UK Office for National Statistics birth registration database), given at the electoral ward level. Data on exposure to traffic pollution were taken from recordings of vehicle flows on all major and minor roads in Greater London. These were linked to the digitized London road network using GIS methods to provide a line-source exposure model of traffic volume. An alternative exposure model was developed by averaging traffic flows over kilometer squares to provide a grid of traffic density scores covering the study region. Corresponding line- and grid-based exposure models for specific pollutants were also developed using published emission factors to convert traffic flow into pollutant load. Thus some of our data and covariates are given at the individual level and others at one of four non-nested group levels (postcode, electoral ward, road network and kilometer grid).

TRADITIONAL APPROACHES TO MODELING EXPOSURE-RESPONSE RELATIONSHIPS IN EPIDEMIOLOGY

There are two traditional approaches to exploring exposure-response relationships in epidemiology. The first is individual-level studies of case-control or cohort design, which relate disease incidence to individual-level covariates and suspected environmental risk factors. Unfortunately, this requires individual measurement of the environmental exposure for each subject in the study (which may prove costly or even impossible to obtain) and offers no way to discover and exploit geographical correlations and trends that might be associated with spatially varying confounders and unobserved risk factors.

The second is group-level studies (also called disease mapping, ecological or small area studies), in which geographical variations in disease occurrence are related to the geographical distribution of suspected risk factors by relating disease counts aggregated over geographical units to aggregated covariate summaries. This approach offers the advantages of requiring only environmental exposure averages over geographical areas (which are often available even when individual-level exposure data are not) and of exploiting spatial correlation in the data. However, inference may be sensitive to the geographical units selected for analysis, and aggregation may distort or mask the true exposure-response relationship for individuals, a phenomenon known as ecological bias.
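
A toy numeric example (ours, not from the Case Study) shows how easily this bias arises: when individual risk is log-linear in exposure, regressing area-level average risk on area-level average exposure recovers a distorted slope whenever the within-area spread of exposure differs across areas.

    # Illustration of ecological bias under a log-linear individual risk model.
    import numpy as np

    rng = np.random.default_rng(4)
    b = 1.0                                     # true individual log-risk slope

    area_means = np.linspace(0.0, 1.0, 20)
    area_sds = np.linspace(0.1, 0.8, 20)        # within-area spread varies
    avg_risk = []
    for m, s in zip(area_means, area_sds):
        x = rng.normal(m, s, 5000)              # individual exposures in area
        avg_risk.append(np.exp(b * x).mean())   # aggregate individual risks

    # ecological regression: log(average risk) on average exposure
    slope = np.polyfit(area_means, np.log(avg_risk), 1)[0]
    print(slope)    # noticeably larger than the true individual slope b = 1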

The limitations of both the individual and group level approaches stem in part from their inability to exploit fully the raw data at their natural level of aggregation. Our data are typical of many epidemiological studies which rely on observational data measured at multiple, non-nested spatial scales. Analytic methods which require all data to be measured at the individual level are thus not always feasible, while aggregating all data to a common spatial scale for group level analysis will introduce approximation errors that affect and may distort inference.

OUR MODEL

The Bayesian spatial modeling approach adopted for this case study relates each quantity to an underlying continuous-parameter random field. We use a marked point process to model the location of each case of infant respiratory illness along with case-specific individual attributes. Our choice of a doubly-stochastic Poisson/gamma point process, with infinitely-divisible random fields at each level of the hierarchy and identity link for our Poisson regression, scales linearly to include each covariate at its given level of spatial aggregation and to offer coherent inference at arbitrary levels of spatial resolution. We include a latent spatially-varying random field as a nonparametric regression component, to capture autocorrelation and to adjust for confounding due to spatially-varying latent causes. The result is a Bayesian semiparametric identity-link Poisson/gamma hierarchical marked point process regression model for cases, individual attributes, and spatially-varying risk factors.
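
To fix ideas, here is a coarse discretized sketch of the Poisson/gamma construction (our illustration; the grid, kernel, and hyperparameters are arbitrary choices): a gamma random measure is smoothed by a kernel to yield a nonnegative intensity surface, and the identity link lets a measured exposure term and the latent field contribute additively to expected counts.

    import numpy as np

    rng = np.random.default_rng(5)

    # gamma random measure: independent gamma masses on a 10x10 grid of cells
    cells = np.array([(i + 0.5, j + 0.5) for i in range(10) for j in range(10)])
    gamma_mass = rng.gamma(shape=0.5, scale=2.0, size=len(cells))

    def intensity(points, bandwidth=1.5):
        """Kernel-smoothed intensity: sum_k mass_k * K(||s - c_k||)."""
        d2 = ((points[:, None, :] - cells[None, :, :]) ** 2).sum(-1)
        kern = np.exp(-0.5 * d2 / bandwidth ** 2) / (2 * np.pi * bandwidth ** 2)
        return kern @ gamma_mass

    # expected counts at some case locations: identity link lets a measured
    # exposure term and the latent field enter additively
    locs = rng.uniform(0, 10, size=(5, 2))
    exposure = rng.uniform(0, 1, size=5)        # e.g. a traffic density score
    beta = 0.3                                  # illustrative exposure effect
    mu = beta * exposure + intensity(locs)
    counts = rng.poisson(mu)
    print(mu, counts)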

We developed this modeling approach in a recent application to a study on self-reported wheezing illness in school children in northern England, using an extrapolated grid of nitrogen dioxide concentrations as a measure of exposure to traffic-related air pollution. Here we apply and extend the approach in a number of ways, to accommodate spatial correlation and measurement error in the observed exposures (traffic volume/density and pollutant emissions) and to adjust for the confounding effect of distance to nearest hospital as a predictor of the risk of admission.

WIDER CONTEXT

This work is being developed within the context of the UK Small Area Health Statistics Unit (SAHSU) at Imperial College. SAHSU is an independent national facility funded by the UK Government to carry out scientific investigations of disease in relation to sources of environmental pollution, particularly in areas where there has been public or media concern. The results of this study, and the development of improved methodology to conduct similar investigations for other disease-exposure relationships, are thus of direct relevance to the Unit's work, and will help to inform policy makers in the UK Government Departments of Health and the Environment who fund SAHSU.

One of the presenters (NB) is also part of the team responsible for developing the BUGS software for Bayesian statistical modeling. We intend to implement the methods presented in the case study in a specialist version of the BUGS software for spatial analysis, which is currently under development.

WORK SCHEDULE

GIS modeling of traffic data: The line- and grid-based models of traffic density are already complete, and the corresponding models of pollutant load from vehicle emissions will be completed by early March 1999.

Exposure-response modeling using traditional methods: We have already completed a small area (group-level) Bayesian analysis of the effects of traffic density on hospital respiratory admissions in children aged less than one year using these data. Our analysis of the effect of vehicle emissions will be complete by the end of March 1999.

Spatial random field modeling: Our hierarchical Bayesian semi-parametric spatial marked point process regression model, developed for our earlier study of self-reported wheezing illness in school children in northern England, has already been extended to incorporate spatial correlation and measurement error in the observed exposures. We will complete our analysis in April 1999.

Discussants: Francesca Dominici (Department of Biostatistics, Johns Hopkins School of Public Health) and Jamie Robins (Harvard School of Public Health).

The previous four Workshops provided extended presentations and discussions on diverse topics.