Rebecca A Betensky

Chair of the Department of Biostatistics

Professor of Biostatistics

Professional overview

Prior to joining NYU, Dr. Betensky was Professor of Biostatistics at the Harvard T.H. Chan School of Public Health. She was director of the Harvard Catalyst (Clinical and Translational Science Award) Biostatistics Program; director of the Data and Statistics Core for the Massachusetts Alzheimer’s Disease Research Center; and director of the Biostatistics Neurology Core at Massachusetts General Hospital. Previously, she was the Biostatistics Program Leader for the Dana-Farber/Harvard Cancer Center.

Dr. Betensky’s research focuses on methods for the analysis of censored and truncated outcomes and covariates, which frequently arise from the subsampling of cohort studies. She has a long-time interest in clinical trials, and has written on the evaluation of biomarkers and the use and interpretation of p-values. She has collaborated extensively in studies in neurologic diseases, and serves as statistical editor for Annals of Neurology.

Dr. Betensky was awarded, and directed for 15 years, an NIH T32 training program in neurostatistics and neuroepidemiology for pre- and post-doctoral students in biostatistics and epidemiology and for clinician-scientists. She previously directed Harvard’s Biostatistics programs to promote and support diversity at all levels in the field of quantitative public health. She was also a member of the BMRD Study Section for review of NIH statistical methodology grants; on committees for the Institute of Medicine; and a co-chair of the technical advisory committee for the scientific registry of transplant recipients.

Dr. Betensky is an elected Fellow of the American Statistical Association and of the International Statistical Institute, and is a past recipient of the Spiegelman Award from the American Public Health Association. She currently serves as a member of the Board of Scientific Counselors for Clinical Science and Epidemiology at the National Cancer Institute.

Education

AB, Mathematics, Harvard University, Cambridge, MA
PhD, Statistics, Stanford University, Stanford, CA

Areas of research and study

Biology
Biostatistics
Neuroepidemiology
Neurology
Neurostatistics
Translational science

Publications

Using conditional logistic regression to fit proportional odds models to interval censored data

Rabinowitz, D., Betensky, R. A., & Tsiatis, A. A. (n.d.).

Publication year

2000

Journal title

Biometrics

Volume

56

Issue

2

Page(s)

511-518
Abstract
An easily implemented approach to fitting the proportional odds regression model to interval-censored data is presented. The approach is based on using conditional logistic regression routines in standard statistical packages. Using conditional logistic regression allows the practitioner to sidestep complications that attend estimation of the baseline odds ratio function. The approach is applicable both for interval-censored data in settings in which examinations continue regardless of whether the event of interest has occurred and for current status data. The methodology is illustrated through an application to data from an AIDS study of the effect of treatment with ZDV+ddC versus ZDV alone on 50% drop in CD4 cell count from baseline level. Simulations are presented to assess the accuracy of the procedure.
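
For intuition, here is a minimal current-status sketch of the idea, assuming statsmodels' ConditionalLogit is available: subjects sharing an examination time form a stratum, and conditioning within strata removes the baseline odds term. The simulation design (baseline odds t, four examination times, β = 0.7) is illustrative only, not taken from the paper.

```python
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit  # statsmodels >= 0.10

rng = np.random.default_rng(0)
n, beta = 2000, 0.7
x = rng.normal(size=n)
u = rng.uniform(size=n)
t = u / ((1 - u) * np.exp(beta * x))           # proportional-odds event times (baseline odds = t)
c = rng.choice([0.5, 1.0, 2.0, 4.0], size=n)   # examination times
delta = (t <= c).astype(int)                   # current status at the examination

# Strata defined by examination time; conditioning sidesteps the baseline odds function.
fit = ConditionalLogit(delta, x.reshape(-1, 1), groups=c).fit()
print(fit.params)  # should be roughly 0.7
```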

A non-parametric maximum likelihood estimator for bivariate interval censored data

Betensky, R. A., & Finkelstein, D. M. (n.d.).

Publication year

1999

Journal title

Statistics in Medicine

Volume

18

Issue

22

Page(s)

3089-3100
Abstract
We derive a non-parametric maximum likelihood estimator for bivariate interval censored data using standard techniques for constrained convex optimization. Our approach extends those taken for univariate interval censored data. We illustrate the estimator with bivariate data from an AIDS study.
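
A univariate analogue conveys the flavor of the estimator. The sketch below implements the classical self-consistency (EM) iteration for the interval-censored NPMLE on a grid of interval endpoints; the bivariate estimator in the paper replaces this grid with rectangles and adds convex-optimization machinery.

```python
import numpy as np

def npmle_interval(L, R, tol=1e-8, max_iter=5000):
    """Self-consistency (Turnbull-type) NPMLE for univariate interval-censored data.

    Each event time is known only to lie in [L[i], R[i]]. Probability mass is
    placed on candidate support points (here: all interval endpoints) and
    re-weighted by EM until the estimate is stationary."""
    support = np.unique(np.concatenate([L, R]))
    A = (L[:, None] <= support) & (support <= R[:, None])  # n x m inclusion matrix
    p = np.full(support.size, 1.0 / support.size)
    for _ in range(max_iter):
        denom = A @ p                                  # P(observed interval i) under p
        p_new = (A * p).T @ (1.0 / denom) / len(L)     # E-step and M-step in one line
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return support, p

L = np.array([1.0, 2.0, 0.0, 3.0])
R = np.array([3.0, 4.0, 2.0, 5.0])
s, p = npmle_interval(L, R)
print(dict(zip(s, p.round(3))))
```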

An extension of Kendall's coefficient of concordance to bivariate interval censored data

Betensky, R. A., & Finkelstein, D. M. (n.d.).

Publication year

1999

Journal title

Statistics in Medicine

Volume

18

Issue

22

Page(s)

3101-3109
Abstract
Non-parametric tests of independence, as well as accompanying measures of association, are essential tools for the analysis of bivariate data. Such tests and measures have been developed for uncensored and right censored failure time data, but have not been developed for interval censored failure time data. Bivariate interval censored data arise in AIDS studies in which screening tests for early signs of viral and bacterial infection are done at clinic visits. Because of missed clinic visits, the actual times of first positive screening tests are interval censored. To handle such data, we propose an extension of Kendall's coefficient of concordance. We apply it to data from an AIDS study that recorded times of shedding of cytomegalovirus (CMV) and times of colonization of mycobacterium avium complex (MAC). We examine the performance of our proposed measure through a simulation study.
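
One simple variant of the idea can be sketched directly: a pair of subjects contributes to concordance only when the observed intervals order them unambiguously in both coordinates. This is a toy illustration of the concept, not the paper's exact estimator.

```python
import numpy as np

def interval_kendall(Lx, Rx, Ly, Ry):
    """Kendall-type concordance for bivariate interval-censored data.

    A pair (i, j) contributes +1 or -1 only when both coordinates can be
    ordered unambiguously from the observed intervals (e.g. Rx[i] < Lx[j]);
    pairs with overlapping intervals contribute 0."""
    def sign(Li, Ri, Lj, Rj):
        if Ri < Lj: return 1    # i definitely precedes j
        if Rj < Li: return -1   # j definitely precedes i
        return 0                # intervals overlap: order unknown
    n, s, npairs = len(Lx), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s += sign(Lx[i], Rx[i], Lx[j], Rx[j]) * sign(Ly[i], Ry[i], Ly[j], Ry[j])
            npairs += 1
    return s / npairs

rng = np.random.default_rng(1)
tx = rng.exponential(size=200)
ty = tx + rng.exponential(size=200)          # positively associated event times
Lx, Rx = np.floor(tx), np.floor(tx) + 1      # unit-width censoring intervals (missed visits)
Ly, Ry = np.floor(ty), np.floor(ty) + 1
print(interval_kendall(Lx, Rx, Ly, Ry))      # positive value expected
```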

Clinical trials using HIV-1 RNA-based primary endpoints: Statistical analysis and potential biases

Marschner, I. C., Betensky, R. A., DeGruttola, V., Hammer, S. M., & Kuritzkes, D. R. (n.d.).

Publication year

1999

Journal title

Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology

Volume

20

Issue

3

Page(s)

220-227
Abstract
Clinical trial endpoints based on magnitude of reduction in HIV-1 RNA levels provide an important complement to endpoints based on percentage of patients achieving complete virologic suppression. However, interpretation of magnitude of reduction can be biased by measurement limitations of virologic assays, particularly lower and upper limits of quantification. Using data from two AIDS Clinical Trials Group (ACTG) studies, widely used crude methods of analyzing HIV-1 RNA reductions were compared with methods that take into account censoring of HIV-1 RNA measurements. Such methods include Kaplan-Meier and censored regression analyses. It was found that standard crude methods of analysis consistently underestimated treatment effects. In some cases, the bias induced by crude methods masked statistically significant differences between treatment arms. Although statistically significant, adjustment for baseline HIV-1 RNA levels had little effect on estimated treatment differences. Furthermore, convenient parametric analyses performed as well as more complex nonparametric analyses. It is concluded that conveniently implemented censored data analyses should be conducted in preference to widely used crude analyses of magnitude of HIV-1 RNA reduction. To obtain complete information about virologic response to antiretroviral therapy, such analyses of magnitude of virologic response should be used to complement analyses of the percentage of patients having complete virologic suppression.
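
The kind of bias at issue can be reproduced with a generic censored-normal (Tobit-style) regression, one member of the censored-regression family the abstract refers to. The data below are simulated, not ACTG data; the assay limit and effect size are arbitrary.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
n = 300
arm = rng.integers(0, 2, n)                        # 0 = control, 1 = experimental arm
drop = 0.5 + 0.8 * arm + rng.normal(0, 0.7, n)     # true log10 RNA reduction
limit = 1.5                                        # assay quantification limit caps observed drops
obs = np.minimum(drop, limit)
cens = drop >= limit

def negloglik(theta):
    b0, b1, log_s = theta
    mu, s = b0 + b1 * arm, np.exp(log_s)
    ll = np.where(cens,
                  stats.norm.logsf(limit, mu, s),  # censored: only know drop >= limit
                  stats.norm.logpdf(obs, mu, s))   # observed exactly
    return -ll.sum()

res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
naive = obs[arm == 1].mean() - obs[arm == 0].mean()
print("naive effect:", round(naive, 3), " censored MLE:", round(res.x[1], 3))  # naive < 0.8
```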

Local EM estimation of the hazard function for interval-censored data

Betensky, R. A., Lindsey, J. C., Ryan, L. M., & Wand, M. P. (n.d.).

Publication year

1999

Journal title

Biometrics

Volume

55

Issue

1

Page(s)

238-245
Abstract
We propose a smooth hazard estimator for interval-censored survival data using the method of local likelihood. The model is fit using a local EM algorithm. The estimator is more descriptive than traditional empirical estimates in regions of concentrated information and takes on a parametric flavor in regions of sparse information. We derive two different standard error estimates for the smooth curve, one based on asymptotic theory and the other on the bootstrap. We illustrate the local EM method for times to breast cosmesis deterioration (Finkelstein, 1986, Biometrics 42, 845-854) and for times to HIV-1 infection for individuals with hemophilia (Kroner et al., 1994, Journal of AIDS 7, 279-286). Our hazard estimates for each of these data sets show interesting structures that would not be found using a standard parametric hazard model or empirical survivorship estimates.

Maximally selected χ² statistics for k × 2 tables

Betensky, R. A., & Rabinowitz, D. (n.d.).

Publication year

1999

Journal title

Biometrics

Volume

55

Issue

1

Page(s)

317-320
Abstract
It is common in epidemiologic analyses to summarize continuous outcomes as falling above or below a threshold. With such a dichotomized outcome, the usual χ² statistics for association or trend can be used to test for equality of proportions across strata of the study population. However, if the threshold is chosen to maximize the test statistic, the nominal χ² reference distributions are incorrect. In this paper, the asymptotic distributions of maximally selected χ² statistics for association and for trend for the k × 2 table are derived. The methodology is illustrated with data from an AIDS clinical trial. The results of simulation experiments that assess the accuracy of the asymptotic distributions in moderate sample sizes are also reported.
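
To see why the nominal reference distribution fails, one can compute the maximally selected statistic directly and compare it against a brute-force permutation null (a simple alternative to the asymptotic distributions derived in the paper). All numbers below are simulated.

```python
import numpy as np
from scipy.stats import chi2_contingency

def max_chi2(y, group, thresholds):
    """Largest chi-square statistic over candidate dichotomizing thresholds."""
    best = 0.0
    for c in thresholds:
        table = np.array([[np.sum((group == g) & (y > c)),
                           np.sum((group == g) & (y <= c))] for g in np.unique(group)])
        if (table.sum(axis=0) > 0).all():              # skip degenerate tables
            best = max(best, chi2_contingency(table, correction=False)[0])
    return best

rng = np.random.default_rng(3)
y = rng.normal(size=120)
group = np.repeat([0, 1, 2], 40)                       # k = 3 strata
cuts = np.quantile(y, np.linspace(0.1, 0.9, 9))        # candidate thresholds
obs = max_chi2(y, group, cuts)

# The nominal chi-square reference is wrong after maximizing over thresholds;
# a permutation null is one crude stand-in for the paper's asymptotics.
perm = [max_chi2(rng.permutation(y), group, cuts) for _ in range(500)]
print("max chi2 =", round(obs, 2), " permutation p =", np.mean(np.array(perm) >= obs))
```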

Predictive value of CD19 measurements for bacterial infections in children infected with human immunodeficiency virus

Betensky, R. A., Calvelli, T., & Pahwa, S. (n.d.).

Publication year

1999

Journal title

Clinical and Diagnostic Laboratory Immunology

Volume

6

Issue

2

Page(s)

247-253
Abstract
We investigated the predictive value of CD19 cell percentages (CD19%) for times to bacterial infections, using data from six pediatric AIDS Clinical Trials Group protocols and adjusting for other potentially prognostic variables, such as CD4%, CD8%, immunoglobulin (IgA) level, lymphocyte count, prior infections, prior zidovudine treatment, and age. In addition, we explored the combined effects of CD19% and IgG level in predicting time to infection. We found that a low CD19% is associated with a nonsignificant 1.2-fold increase in hazard of bacterial infection (95% confidence interval: 0.97, 1.49). In contrast, a high IgG level is associated with a nonsignificant 0.87-fold decrease in hazard of infection (95% confidence interval: 0.68, 1.12). CD4% was more prognostic of time to bacterial infection than CD19% or IgG level. Low CD19% and high IgG levels together lead to a significant (P < 0.01) 0.50-fold decrease in hazard (95% confidence interval: 0.35, 0.73) relative to low CD19% and low IgG levels. Similarly, in a model involving assay result changes (from baseline to 6 months) as well as baseline values, the effect of CD19% by itself is reversed from its effect in conjunction with IgG. In this model, CD19% that are increasing and high are associated with decreases in hazard of infection (P < 0.01), while increasing CD19% and increasing IgG levels are associated with significant (at the P = 0.01 level) fourfold increases in hazard of infection relative to stable CD19% and decreasing, stable, or increasing IgG levels. Our data suggest that CD19%, in conjunction with IgG level, provides a useful prognostic tool for bacterial infections. It is highly likely that T-helper function impacts on B-cell function; thus, inclusion of CD4% in such analyses may greatly enhance the assessment of risk for bacterial infection.
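
A hazard model with a CD19-by-IgG interaction, of the general kind described above, can be sketched with lifelines' CoxPHFitter; the covariates, effect sizes, and censoring below are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 500
cd19_low = rng.integers(0, 2, n)           # hypothetical indicator: 1 = low CD19%
igg_high = rng.integers(0, 2, n)           # hypothetical indicator: 1 = high IgG
# interaction-driven hazard, echoing the abstract's CD19% x IgG finding
lam = 0.1 * np.exp(0.2 * cd19_low - 0.7 * (cd19_low * igg_high))
t = rng.exponential(1 / lam)
c = rng.exponential(10, n)                 # administrative censoring times
df = pd.DataFrame({"time": np.minimum(t, c), "event": (t <= c).astype(int),
                   "cd19_low": cd19_low, "igg_high": igg_high,
                   "cd19_x_igg": cd19_low * igg_high})
print(CoxPHFitter().fit(df, "time", "event").params_)  # log hazard ratios
```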

Thymocyte development in Ah-receptor-deficient mice is refractory to TCDD-inducible changes

Hundeiker, C., Pineau, T., Cassar, G., Betensky, R. A., Gleichmann, E., & Esser, C. (n.d.).

Publication year

1999

Journal title

International Journal of Immunopharmacology

Volume

21

Issue

12

Page(s)

841-859
Abstract
The arylhydrocarbon receptor (AhR), a ligand-activated transcription factor, is differentially distributed in tissues and abundant in the thymus epithelium. The activated AhR can induce the transcription of an array of genes, including genes of cell growth and differentiation. Neither the physiological function of the AhR nor its putative natural ligand is known. 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is a xenobiotic high-affinity activator of the AhR, and the AhR appears to be essential for most of the multifold toxic effects of TCDD. Activation of the AhR by even low doses of TCDD results in general immunosuppression and thymus hypoplasia. TCDD exposure interferes with thymocyte development; for instance, it reduces the proliferation rate of the very immature (CD4-CD8- and CD4-CD8+HSA+) thymocytes, leads to preferential emigration of very immature cells, and drastically skews the differentiation of thymocyte subpopulations towards mature CD4-CD8+ αβTCR(high) thymocytes. As shown here, in fetal thymi of AhR-deficient mice, thymocyte differentiation kinetics, as defined by CD4 and CD8 surface markers, were comparable to AhR(+/+) C57BL/6 mice. Also, the cell emigration characteristics were similar to AhR(+/+) mice. These parameters were refractory to TCDD exposure in the AhR(-/-) mice, but not in the C57BL/6 mice. However, in AhR-deficient mice at gestation day 15 more CD4-CD8- immature cells bore high amounts of the αβ-T-cell receptor. Also, fetal thymocyte numbers were significantly lower, as compared to strain C57BL/6. Thus, the AhR is the mediator of thymotoxic effects of TCDD.

A boundary crossing probability for the Bessel process

Betensky, R. A. (n.d.).

Publication year

1998

Journal title

Advances in Applied Probability

Volume

30

Issue

3

Page(s)

807-830
Abstract
Analytic approximations are derived for the distribution of the first crossing time of a straight-line boundary by a d-dimensional Bessel process and its discrete time analogue. The main ingredient for the approximations is the conditional probability that the process crossed the boundary before time m, given its location beneath the boundary at time m. The boundary crossing probability is of interest as the significance level and power of a sequential test comparing d + 1 treatments using an O'Brien-Fleming (1979) stopping boundary (see Betensky 1996). Also, it is shown by DeLong (1980) to be the limiting distribution of a nonparametric test statistic for multiple regression. The approximations are compared with exact values from the literature and with values from a Monte Carlo simulation.
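
The crossing probability is straightforward to estimate by Monte Carlo, since the d-dimensional Bessel process is the norm of d-dimensional Brownian motion; a direct simulation sketch (step size and boundary parameters arbitrary) against which analytic approximations could be checked:

```python
import numpy as np

def bessel_crossing_prob(d, a, b, T=1.0, nstep=1000, nsim=20000, seed=4):
    """Monte Carlo estimate of P(sup_{t<=T} ||W_t|| >= a + b*t) for d-dimensional
    Brownian motion W, i.e. a straight-line boundary crossing for the
    d-dimensional Bessel process ||W_t||, on a discrete time grid."""
    rng = np.random.default_rng(seed)
    dt = T / nstep
    crossed = np.zeros(nsim, dtype=bool)
    w = np.zeros((nsim, d))
    for k in range(1, nstep + 1):
        w += rng.normal(scale=np.sqrt(dt), size=(nsim, d))   # Brownian increments
        crossed |= np.linalg.norm(w, axis=1) >= a + b * (k * dt)
    return crossed.mean()

print(bessel_crossing_prob(d=3, a=2.0, b=0.5))
```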

Construction of a continuous stopping boundary from an alpha spending function

Betensky, R. A. (n.d.).

Publication year

1998

Journal title

Biometrics

Volume

54

Issue

3

Page(s)

1061-1071
Abstract
Lan and DeMets (1983, Biometrika 70, 659-663) proposed a flexible method for monitoring accumulating data that does not require the number and times of analyses to be specified in advance yet maintains an overall Type I error, α. Their method amounts to discretizing a preselected continuous boundary by clumping the density of the boundary crossing time at discrete analysis times and calculating the resultant discrete-time boundary values. In this framework, the cumulative distribution function of the continuous-time stopping rule is used as an alpha spending function. A key assumption that underlies this method is that future analysis times are not chosen on the basis of the current value of the statistic. However, clinical trials may be monitored more frequently when they are close to crossing the boundary. In this situation, the corresponding continuous-time boundary should be used. Here we demonstrate how to construct a continuous stopping boundary from an alpha spending function. This capability is also useful in the design of clinical trials. We use the Beta-Blocker Heart Attack Trial (BHAT) and AIDS Clinical Trials Group protocol 021 for illustration.
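
The discretization step can be mimicked by simulation: given cumulative spending values at the analysis times, solve sequentially for the boundary value that spends exactly the allotted error increment. This Monte Carlo sketch is in the spirit of the Lan-DeMets construction, with an arbitrary quadratic spending function for illustration (it is not the paper's continuous-boundary construction).

```python
import numpy as np

def spending_boundaries(times, alpha_spend, nsim=200_000, seed=5):
    """Discrete-time boundaries b_k such that the cumulative crossing probability
    of |Z(t_k)| by analysis k matches the spending function, estimated by
    simulating standardized Brownian motion at the analysis times."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(size=(nsim, len(times))) * np.sqrt(np.diff(times, prepend=0))
    s = np.cumsum(increments, axis=1)          # Brownian motion at t_1, ..., t_K
    z = s / np.sqrt(times)                     # standardized test statistics
    alive = np.ones(nsim, dtype=bool)          # paths that have not yet crossed
    bounds = []
    for k, a_k in enumerate(alpha_spend):
        # choose b_k so the newly spent error among surviving paths matches
        # the increment of the spending function
        extra = a_k - (1 - alive.mean())
        b_k = np.quantile(np.abs(z[alive, k]), 1 - extra / alive.mean())
        bounds.append(b_k)
        alive &= np.abs(z[:, k]) < b_k
    return bounds

times = np.array([0.25, 0.5, 0.75, 1.0])
spend = 0.05 * times**2                        # illustrative quadratic spending function
print([round(b, 3) for b in spending_boundaries(times, spend)])
```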

Effect of shipment, storage, anticoagulant, and cell separation on lymphocyte proliferation assays for human immunodeficiency virus-infected patients

Weinberg, A., Betensky, R. A., Zhang, L. I., & Ray, G. (n.d.).

Publication year

1998

Journal title

Clinical and Diagnostic Laboratory Immunology

Volume

5

Issue

6

Page(s)

804-807
Abstract
Lymphocyte proliferation assays (LPA), which can provide important information regarding the immune reconstitution of human immunodeficiency virus (HIV)-infected patients on highly active antiretroviral therapy, frequently involve shipment of specimens to central laboratories. In this study, we examine the effect of stimulant, anticoagulant, cell separation, storage, and transportation on LPA results. LPA responses of whole blood and separated peripheral blood mononuclear cells (PBMC) to different stimulants (cytomegalovirus, varicella-zoster virus, candida and tetanus toxoid antigens, and phytohemagglutinin) were measured using fresh specimens shipped overnight and frozen specimens collected in heparin, acid citrate dextrose (ACD), and citrate cell preparation tubes (CPT) from 12 HIV-infected patients and uninfected controls. Odds ratios for positive LPA responses were significantly higher in separated PBMC than in whole blood from ACD- and heparin-anticoagulated samples obtained from HIV-infected patients and from ACD-anticoagulated samples from uninfected controls. On separated PBMC, positive responses were significantly more frequent in fresh samples compared with overnight transportation for all antigens and compared with cryopreservation for the candida and tetanus antigens. In addition, viral antigen LPA responses were better preserved in frozen PBMC compared with specimens shipped overnight. CPT tubes yielded significantly more positive LPA results for all antigens, irrespective of the HIV patient status compared with ACD, but only for the candida and tetanus antigens and only in HIV-negative controls compared with heparin. Although HIV-infected patients had a significantly lower number of positive antigen-driven LPA responses compared with uninfected controls, most of the specimen processing variables had similar effects on HIV-positive and -negative samples. We conclude that LPA should be performed on site, whenever feasible, by using separated PBMC from fresh blood samples collected in either heparin or ACD. However, if on-site testing is not available, optimal transportation conditions should be established for specific antigens.

Multiple imputation for early stopping of a complex clinical trial

Betensky, R. A. (n.d.).

Publication year

1998

Journal title

Biometrics

Volume

54

Issue

1

Page(s)

229-242
Abstract
It is desirable to have procedures available for stopping a clinical trial early if there appears to be no treatment effect. Conditional power procedures allow for early stopping in favor of the null hypothesis if the probability of rejecting H0 at the planned end of the trial given the current data and a value of the parameter of interest is below some threshold level. Lan, Simon, and Halperin (1982, Communications in Statistics C1, 207-219) proposed a stochastic curtailment procedure that calculates the conditional power under the alternative hypothesis. Alternatively, predictive power procedures incorporate information from the observed data by averaging the conditional power over the posterior distribution of the parameter. For complex problems in which explicit evaluation of conditional power is not possible, we propose treating the problem of projecting the outcome of a trial given the current data as a missing data problem. We then complete the data using multiple imputation and thus eliminate the need for explicit calculation of conditional power. We apply this method to AIDS Clinical Trials Group (ACTG) protocol 118 and to several simulated clinical trials.
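
A toy normal-outcome version of the imputation idea, not the ACTG 118 analysis: at an interim look, repeatedly impute the unobserved remainder of the trial from the current estimates and record how often the completed trial rejects H0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N, n = 200, 80                       # planned and current per-arm sample sizes
xa = rng.normal(0.15, 1, n)          # interim data, treatment arm
xb = rng.normal(0.00, 1, n)          # interim data, control arm

def rejects(xa_full, xb_full, alpha=0.05):
    return stats.ttest_ind(xa_full, xb_full).pvalue < alpha

# Impute the unobserved tail of the trial from the current estimates and ask
# how often the completed trial rejects H0 -- a simulation stand-in for an
# explicit conditional/predictive power formula.
m, hits = 500, 0
for _ in range(m):
    xa_new = rng.normal(xa.mean(), xa.std(ddof=1), N - n)
    xb_new = rng.normal(xb.mean(), xb.std(ddof=1), N - n)
    hits += rejects(np.r_[xa, xa_new], np.r_[xb, xb_new])
print("estimated conditional power:", hits / m)
```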

An examination of methods for sample size recalculation during an experiment

Betensky, R. A., & Tierney, C. (n.d.).

Publication year

1997

Journal title

Statistics in Medicine

Volume

16

Issue

22

Page(s)

2587-2598
Abstract
In designing experiments, investigators frequently can specify an important effect that they wish to detect with high power, without the ability to provide an equally certain assessment of the variance of the response. If the experiment is designed based on a guess of the variance, an under-powered study may result. To remedy this problem, there have been several procedures proposed that obtain estimates of the variance from the data as they accrue and then recalculate the sample size accordingly. One class of procedures is fully sequential in that it assesses after each response whether the current sample size yields the desired power based on the current estimate of the variance. This approach is efficient, but it is not practical or advisable in many situations. Another class of procedures involves only two or three stages of sampling and recalculates the sample size based on the observed variance at designated times, perhaps coinciding with interim efficacy analyses. The two-stage approach can result in substantial oversampling, but it is feasible in many situations, whereas the three-stage approach corrects the problem of oversampling, but is less feasible. We propose a procedure that aims to combine the advantages of both the fully sequential and the two-stage approaches. This quasi-sequential procedure involves only two stages of sampling and it applies the stopping rule from the fully sequential procedure to data beyond the initial sample which we obtain via multiple imputation. We show through simulations that when the initial sample size is substantially less than the correct sample size, the mean squared error of the final sample size calculated from the quasi-sequential procedure can be considerably less than that from the two-stage procedure. We compare the distributions of these recalculated sample sizes and discuss our findings for alternative procedures, as well.
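
A minimal two-stage sketch, assuming a two-sample z-test design: the variance is re-estimated from the first-stage data and the per-arm sample size is recomputed for the prespecified effect. The pilot data and design targets below are illustrative.

```python
import numpy as np
from scipy import stats

def recalc_n(pilot, delta, alpha=0.05, power=0.9):
    """Two-stage rule: re-estimate the variance from the first stage and
    recompute the per-arm n needed to detect `delta` with the target power."""
    s2 = pilot.var(ddof=1)
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    n = 2 * s2 * (za + zb) ** 2 / delta**2     # standard two-sample formula
    return max(int(np.ceil(n)), len(pilot))    # never shrink below the pilot

rng = np.random.default_rng(7)
pilot = rng.normal(0, 2.0, size=30)            # guessed sd was 1, true sd is 2
print("recalculated per-arm n:", recalc_n(pilot, delta=1.0))
```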

Conditional power calculations for early acceptance of H0 embedded in sequential tests

Betensky, R. A. (n.d.).

Publication year

1997

Journal title

Statistics in Medicine

Volume

16

Issue

4

Page(s)

465-477
Abstract
For ethical and efficiency concerns one often wishes to design a clinical trial to stop early if there is a strong treatment effect or if there is strong evidence of no treatment effect. There is a large literature to address the design of sequential trials for detecting treatment differences. There has been less attention paid to the design of trials for detecting lack of a treatment difference and most of the designs proposed have been ad hoc modifications of the traditional designs. In the context of fixed sample tests, various authors have proposed basing the decision to stop in favour of the null hypothesis, H0, on conditional power calculations for the end of the trial given the current data. Here I extend this procedure to the popular sequential designs: the O'Brien-Fleming test and the repeated significance test. I derive explicit boundaries for monitoring the test statistic useful for visualizing the impact of the parameters on the operating characteristics of the tests and thus for the design of the tests. Also, they facilitate the use of boundary crossing methods for approximations of power. I derive appropriate boundaries retrospectively for two clinical trials: one that concluded with no treatment difference (AIDS Clinical Trials Group protocol 118) and one that stopped early for positive effect (Beta-Blocker Heart Attack Trial). Finally, I compare the procedures based on the different upper boundaries and assess the impact of allowing for early stopping in favour of H0 in numerical examples.
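
The underlying conditional power computation, written here in B-value form for a one-sided normal test, takes a few lines; comparing the design-alternative drift with a drift estimated from the current data mirrors the contrast between the Lan-Simon-Halperin and current-trend approaches discussed in these abstracts. This is a generic textbook formula, not the paper's boundary construction.

```python
from scipy.stats import norm

def conditional_power(z_t, t, drift, alpha=0.025):
    """One-sided conditional power at information fraction t.

    In B-value form, B(t) = z_t * sqrt(t) and B(1) - B(t) ~ N(drift*(1-t), 1-t),
    so CP = P(B(1) > z_{1-alpha} | B(t))."""
    b_t = z_t * t**0.5
    return norm.sf((norm.ppf(1 - alpha) - b_t - drift * (1 - t)) / (1 - t) ** 0.5)

# Halfway through a trial designed for 90% power (drift = 1.96 + 1.28 = 3.24),
# a current z-statistic of 0.5 still has ~50% conditional power under the
# design alternative, but only ~4% under the drift estimated from the data.
print(round(conditional_power(0.5, 0.5, drift=3.24), 3))
print(round(conditional_power(0.5, 0.5, drift=0.5 / 0.5**0.5), 3))
```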

Early stopping to accept H0 based on conditional power: Approximations and comparisons

Betensky, R. A. (n.d.).

Publication year

1997

Journal title

Biometrics

Volume

53

Issue

3

Page(s)

794-806
Abstract
It is intuitively appealing to clinicians to stop a trial early to accept the null hypothesis H0 if it appears that this will be the likely outcome at the planned end of the trial. We consider procedures that calculate at each time point the conditional probability of rejecting H0 at the end of the trial given the current data and some value of the parameter of interest. Lan, Simon, and Halperin (1982, Communications in Statistics C1, 207-219) calculate this probability under the design alternative, and Pepe and Anderson (1992, Applied Statistics 41, 181-190) use an alternative based solely on the current data. We investigate a modification to Pepe and Anderson's (1992) procedure that has a more satisfying interpretation. We define all of these procedures as formal sequential tests with lower stopping boundaries and study them in this context. This facilitates an improved understanding of the interplay of parameters by introducing visual displays, and it leads to an approximation for power by treating it as a boundary crossing probability. We use these tools to compare the performances of the different designs under a variety of parameter configurations.

Local estimation of smooth curves for longitudinal data

Betensky, R. A. (n.d.).

Publication year

1997

Journal title

Statistics in Medicine

Volume

16

Issue

21

Page(s)

2429-2445
Abstract
Longitudinal data are commonly analysed using mixed-effects models in which the population growth curve and individual subjects' growth curves are assumed to be known functions of time. Frequently, polynomial functions are assumed. In practice, however, polynomials may not fit the data and a mechanistic model that could suggest a non-linear function might not be known. Recent, more flexible approaches to these data approximate the underlying population mean curve or the individual subjects' curves using smoothing splines or kernel-based functions. I apply the local likelihood estimation method of Tibshirani and Hastie and estimate smooth population and individual growth curves by assuming that they are approximately linear or quadratic functions of time within overlapping neighbourhoods. This method requires neither complete data, nor that measurements are made at the same time points for each individual. For descriptive purposes, this approach is easy to implement with standard software. Inference for the resulting curve is facilitated by the theory of estimating equations. I illustrate the methods with data sets containing longitudinal measurements of serum neopterin in an AIDS clinical trial, measurements of ultrafiltration rates of high flux membrane dialysers for haemodialysis, and measurements of the volume of air expelled by individuals.
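
The neighbourhood fitting step can be illustrated for a single smooth curve: at each grid point, fit a weighted quadratic in time and keep the local intercept. This is a plain local-regression sketch (Gaussian kernel, arbitrary bandwidth), not the paper's full population-plus-individual machinery.

```python
import numpy as np

def local_quadratic(x, y, grid, h):
    """Locally weighted quadratic fit: at each grid point, fit y ~ 1 + u + u^2
    by weighted least squares over a Gaussian-kernel neighbourhood of width h."""
    est = np.empty_like(grid, dtype=float)
    for k, g in enumerate(grid):
        u = x - g
        w = np.sqrt(np.exp(-0.5 * (u / h) ** 2))       # sqrt-weights for WLS via lstsq
        X = np.column_stack([np.ones_like(u), u, u**2])
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
        est[k] = beta[0]                               # fitted value at the grid point
    return est

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 10, 150))
y = np.sin(x) + rng.normal(0, 0.3, 150)
grid = np.linspace(0, 10, 50)
print(local_quadratic(x, y, grid, h=0.8)[:5].round(2))
```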

Sequential analysis of censored survival data from three treatment groups

Betensky, R. A. (n.d.).

Publication year

1997

Journal title

Biometrics

Volume

53

Issue

3

Page(s)

807-822
Abstract
In this paper, we propose a simple means of designing and analyzing a sequential procedure for comparing survival data from three treatments with the goal of eventually identifying the best treatment. Our procedure consists of the concatenation of two sequential tests, as is suggested by Siegmund (1993, Annals of Statistics 21, 464-483) for instantaneous normal responses. The first sequential test is a global test that attempts to detect an overall treatment effect. If one is found, the least promising treatment is eliminated and a second sequential test attempts to identify the better of the two remaining treatments. Although there are three different information time scales to consider corresponding to each pairwise comparison, we show that under certain conditions they may be approximated by a single time scale. This enables us to gain insight into the problem of censored survival data from the more easily understood case of instantaneous normal data. Also, it eliminates the need for intensive computations and simulations for the design and analysis of the procedure.

An analysis of correlated multivariate binary data: Application to familial cancers of the ovary and breast

Betensky, R. A., & Whittemore, A. S. (n.d.).

Publication year

1996

Journal title

Journal of the Royal Statistical Society. Series C: Applied Statistics

Volume

45

Issue

4

Page(s)

411-429
Abstract
The association between ovarian and breast cancer, both within and between family members, is examined using pooled data from five case-control studies. The occurrences of these diseases in sisters and mothers are analysed using a quadratic exponential model, which is an extension of the model of Zhao and Prentice for correlated univariate data. An advantage of this model is that the associations between pairs of diseases and pairs of relatives, which are of primary importance, are related to simple functions of its parameters. Also, the model applies to non-randomly sampled data, such as the case-control data, because it completely specifies the joint distribution of responses. A major weakness is that it is not immediately applicable to studies of families of different sizes. Nonetheless, we find it to be useful under certain conditions, such as rare diseases. Our analysis of the data suggests that the risk of ovarian cancer is highly dependent on maternal history.

An O'Brien-Fleming sequential trial for comparing three treatments

Betensky, R. A. (n.d.).

Publication year

1996

Journal title

Annals of Statistics

Volume

24

Issue

4

Page(s)

1765-1791
Abstract
We consider a sequential procedure for comparing three treatments with the goal of ultimately selecting the best treatment. This procedure starts with a sequential test to detect an overall treatment difference and eliminates the apparently inferior treatment if this test rejects the equality of the treatments. It then proceeds with a sequential test of the remaining two treatments. We base these sequential tests on the stopping boundaries popularized by O'Brien and Fleming. Our procedure is similar in structure to that used by Siegmund in conjunction with modified repeated significance tests. We compare the performances of the two procedures via a simulation experiment. We derive analytic approximations for an error probability, the power and the expected sample size of our procedure, which we compare to simulated values. Furthermore, we propose a modification of the procedure for the comparison of a standard treatment with experimental treatments.

Low-grade, latent prostate cancer volume: Predictor of clinical cancer incidence?

Whittemore, A. S., Keller, J. B., & Betensky, R. (n.d.).

Publication year

1991

Journal title

Journal of the National Cancer Institute

Volume

83

Issue

17

Page(s)

1231-1235
Abstract
We hypothesize that each cell in low-grade (Gleason grade 1-3) prostate cancer tissue is at risk of transformation into a cell which produces a high-grade (Gleason grade 4-5) clinical cancer after a short period of growth. As a consequence, the volume of low-grade, latent cancer tissue in the prostate glands of men at any age determines their incidence rate for high-grade, clinical cancer a few years later. Autopsy and incidence data for both white men and black men support this conclusion, with a tumor growth period of about 7 years. The transformation rate is similar for black men and for white men, about 0.024 high-grade cancers per year per cm3 of low-grade latent cancer volume. Our hypothesis explains the infrequent occurrence of clinical cancer despite the high prevalence of latent cancer, the steep rise of clinical cancer incidence with age despite the slow rise of latent cancer prevalence with age, and the disparities in clinical cancer incidence among some populations despite their similar latent cancer prevalence. This hypothesis suggests that low-grade cancer volume is a critical determinant of clinical cancer risk. [J Natl Cancer Inst 83:1231-1235, 1991]
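
As a worked example of the quoted rate (the latent volume here is hypothetical):

```python
# Predicted clinical incidence = transformation rate x latent low-grade volume.
rate_per_cm3_year = 0.024   # high-grade cancers / year / cm3 (from the abstract)
latent_volume_cm3 = 0.5     # hypothetical latent low-grade tumor volume
print(f"{rate_per_cm3_year * latent_volume_cm3:.3f} high-grade cancers per year")
```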

Actual versus ideal weight in the calculation of surface area: Effects on dose of 11 chemotherapy agents

Gelman, R. S., Tormey, D. C., Betensky, R., Mansour, E. G., Falkson, H. C., Falkson, G., Creech, R. H., & Haller, D. G. (n.d.).

Publication year

1987

Journal title

Cancer Treatment Reports

Volume

71

Issue

10

Page(s)

907-911
Abstract
This study of 2382 breast, 182 rectal, 817 colon, and 351 lung cancer patients treated with combination chemotherapy on eight phase III Eastern Cooperative Oncology Group protocols indicates that 69% would receive a higher dose of at least one drug if surface area were calculated from actual weight rather than from the minimum of actual and ideal weight. Forty-eight percent of the patients would have at least a 10% increase in drug dose based on actual weight and only 8% would have at least a 25% increase in drug dose based on actual weight. Only on the premenopausal adjuvant breast cancer protocol and among women on the rectal adjuvant study do the differences in dose based on actual rather than ideal weight increase significantly with age. On the postmenopausal adjuvant breast study and on the lung cancer study, the differences in dose decrease significantly with age. For all age decades and both sexes within each protocol, the mean differences between dose based on actual and dose based on ideal weights were on the same order as the rounding factors for the 11 drugs studied. From the literature on the effects of doses of common chemotherapies on leukopenia, it appears that the percent of hematologic toxicity would not be raised to unacceptable levels by using actual weight to set doses.
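
The dose comparison rests on a body-surface-area formula; the paper does not state which one was used, so the sketch below uses the common Du Bois formula with a hypothetical patient and a hypothetical mg/m² dose.

```python
def bsa_dubois(weight_kg, height_cm):
    """Du Bois body-surface-area formula (m^2), a common choice for dosing."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

actual, ideal, height = 95.0, 70.0, 170.0   # hypothetical patient
dose_per_m2 = 100.0                         # hypothetical mg/m^2 drug dose
d_actual = dose_per_m2 * bsa_dubois(actual, height)
d_ideal = dose_per_m2 * bsa_dubois(min(actual, ideal), height)
print(f"actual-weight dose {d_actual:.0f} mg vs ideal-weight dose {d_ideal:.0f} mg "
      f"({100 * (d_actual / d_ideal - 1):.0f}% higher)")
```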

Contact

rebecca.betensky@nyu.edu
708 Broadway
New York, NY 10003