Biostatistics Seminars

 

A series of Biostatistics Seminars typically takes place at 13:00 on Thursdays (time and location may vary; see below). Anyone outside the Department of Health Sciences, University of Leicester, is very welcome to attend at any point, but please let us know by email beforehand in case we need to find a bigger room (contact one of the seminar organisers: Prof. Alex Sutton ajs22@le.ac.uk or Prof. Nuala Sheehan nas11@le.ac.uk).

 

 

RECENT AND FUTURE SEMINARS

 

Spring Term 2017


Thursday 30 March 2017: Jessica Barrett, Department of Public Health and Primary Care, School of Clinical Medicine, University of Cambridge

Dynamic risk prediction for cardiovascular disease

The 10-year risk of cardiovascular disease (CVD) is used to make clinical decisions about whether to prescribe lipid-lowering medication. Most CVD risk prediction tools use single measurements of CVD risk factors to estimate the 10-year risk. I will consider CVD risk prediction as a dynamic risk prediction problem, making use of repeated measurements of CVD risk factors to improve risk prediction. I will compare statistical methods for dynamic risk prediction, including joint modelling, where repeated measurements and time to CVD are modelled simultaneously, and landmarking, a two-stage approach where past repeated measurements are modelled separately from future CVD events. Predictive accuracy is assessed using measures of discrimination, such as the C-index, and calibration, such as the Brier Score.
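For readers unfamiliar with the two predictive-accuracy measures named above, a standard formulation (notation ours, not taken from the abstract) is

\[ \text{Brier score} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{p}_i - y_i\bigr)^2, \qquad C\text{-index} = \Pr\bigl(\hat{p}_i > \hat{p}_j \mid T_i < T_j\bigr), \]

where \(\hat{p}_i\) is the predicted 10-year CVD risk for individual \(i\), \(y_i\) the observed binary outcome, and \(T_i\) the event time; the C-index is estimated over all comparable pairs. A lower Brier score indicates better overall accuracy, and a C-index of 0.5 corresponds to no discrimination.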

 

*************************************************************************************************************************************

 

 

PAST SEMINARS

 

 

Autumn Term 2012

Thursday 18 Oct: Dr Susan Griffin, University of York, "Incorporating health inequality concerns into cost-effectiveness analysis to support decision making"

 

Thursday 15 Nov: Prof Simon Thompson, University of Cambridge, "What are the benefits and harms of breast cancer screening?  An independent review of the evidence."

 

Thursday 22 Nov: Prof David Collett, NHS Blood and Transplant, “Statistical issues in the allocation of livers for transplantation”

 

Abstract: Liver transplantation has become a well-established treatment for liver failure, but the shortage of suitable organs from deceased donors limits the number of transplants in the UK to less than 700 a year.  There is therefore a need for procedures that ensure that donated organs are allocated in a fair, transparent and unbiased manner.  New liver allocation schemes are being developed based on statistical models for survival with and without a transplant.  The development of these models needs to take account of informative censoring, and so methods for exploring the extent of informative censoring, and allowing for it in the modelling process, will be described.  The number of livers available for transplantation can also be increased by splitting a whole liver to provide for two transplants.  An analysis to compare outcomes following split liver transplantation with remaining on the waiting list for a whole liver will also be presented.

 

Spring Term 2013

Thursday 7 February:  Dr Jack Bowden, MRC Biostatistics Unit Cambridge. "Adaptive clinical trial designs incorporating mid-trial sample size adjustment".

 

Abstract: When designing a randomised controlled trial (RCT) to test the efficacy of a treatment in a chosen patient population, assumptions need to be made about the mean and spread of patient responses to treatment in order to derive an appropriate sample size. However, these assumptions may be subject to considerable uncertainty and, if their validity is not subsequently checked, could lead to a hopelessly under- or overpowered study. Adaptive designs incorporating sample size re-estimation offer a potential solution to this problem, by enabling interim patient data to be used to decide whether the initial assumptions were sensible and, if necessary, to alter the size and scope of the trial. Unfortunately, the uptake of adaptive designs such as this has been poor. This is due in part to fundamental concerns over their perceived validity or scientific rigour, especially if unblinding has occurred.

 

In this talk I will review a popular adaptive design method for RCTs with mid-trial sample size adjustment (that of Li et al., Biostatistics 3:277-287). My initial motivation for this review was to provide a clear explanation to a funding body of a possible design alternative, after they had rejected an application for a standard (fixed sample size) trial. I will describe the rationale for its use in this context and then discuss some extensions to the implementation of Li et al.'s approach that aim to make it an even more attractive and understandable alternative to a fixed sample size design.

 

 

 

Thursday 28 February: Dr Andrew Morris, Wellcome Trust Centre for Human Genetics, Oxford. "Trans-ethnic meta-analysis for discovery and fine-mapping of type 2 diabetes susceptibility loci".

 

Abstract: The detection of loci contributing effects to complex human traits, and their subsequent fine-mapping for the location of causal variants, remains a considerable challenge for the genetics research community. Meta-analyses of genome-wide association studies (GWAS), primarily ascertained from European-descent populations, have made considerable advances in our understanding of complex trait genetics, although much of their heritability is still unexplained. With the increasing availability of GWAS data from diverse populations, trans-ethnic meta-analysis may offer an exciting opportunity to detect novel complex trait loci and to improve the resolution of fine-mapping of causal variants through increased sample size and by leveraging differences in local linkage disequilibrium structure between ancestry groups. However, we might also expect there to be substantial genetic heterogeneity between diverse populations, both in terms of the spectrum of causal variants and their allelic effects, which cannot easily be accommodated through traditional approaches to meta-analysis. In this seminar, I will present novel methodology for trans-ethnic meta-analysis that takes account of the expected similarity in allelic effects between the most closely related populations, while allowing for heterogeneity between more diverse ethnic groups. The MANTRA approach also facilitates fine-mapping by defining “credible sets” of variants that are most likely to be causal (or tagging an untyped causal variant). I will present an application of MANTRA to GWAS of type 2 diabetes in a total of 26,488 cases and 83,964 controls from European, East Asian, South Asian, and Mexican and Mexican American ancestry populations, highlighting the benefits of trans-ethnic meta-analysis for the discovery and characterisation of complex trait loci.

 

 

 

Thursday 14 March: Dr Rebecca Turner, MRC Biostatistics Unit, Cambridge. "Making use of external information on heterogeneity and biases in meta-analysis".

 

Abstract: Many meta-analyses contain only a small number of studies, making it difficult to estimate the extent of between-study heterogeneity. An additional problem is that the original studies are often affected by varying amounts of internal bias caused by methodological flaws. Standard methods for meta-analysis do not acknowledge biases in the studies, and do not allow for imprecision in the estimated between-study heterogeneity variance. In this talk, I will present and discuss methods for incorporating empirical evidence on the likely extent of heterogeneity, and methods for making adjustments for anticipated within-study biases.

 

 

 

Summer Term 2013

 

 

Thursday 2 May: Dr. Richard Riley, University of Birmingham. "Meta-analysis of diagnostic and prognostic studies"

 

Abstract: In this talk I will discuss some of my recent research on meta-analysis for diagnostic and prognostic studies. In the first part, I will consider the issue of multiple thresholds. When meta-analysing diagnostic test accuracy studies, each study may provide results for one or more thresholds; however, the thresholds reported by each study often differ. In this situation researchers typically meta-analyse each threshold independently. Here, I instead consider jointly synthesising the multiple thresholds to gain more information. Two approaches are examined: a multivariate meta-analysis model and a linear imputation method. Both of these can be followed by a meta-regression that constrains sensitivity/specificity estimates to decrease/increase as threshold value increases, thereby producing a clinically interpretable summary ROC curve. In the second part, I present a review of individual patient data (IPD) meta-analyses of prognostic factor studies, and evaluate how they are conducted and reported. IPD has long been considered the 'gold-standard' approach to meta-analysis, but I will show that it does not solve all the problems and much work is needed in the IPD field.

 

 

 

Thursday 13 June: Prof. Nigel Stallard, Warwick Medical School, University of Warwick. "Adaptive designs for confirmatory clinical trials with treatment selection and short-term endpoint information"

 

Abstract: Most statistical methodology for confirmatory phase III clinical trials focuses on the comparison of a control treatment with a single experimental treatment, with selection of this experimental treatment made in an earlier exploratory phase II trial. Recently, however, there has been increasing interest in methods for adaptive seamless phase II/III trials that combine the treatment selection element of a phase II clinical trial with the definitive analysis usually associated with phase III clinical trials. Motivated by a study in Alzheimer's disease, this talk will describe a method for combining phases II and III in a single clinical trial. The trial is conducted in stages, with the most promising of a number of experimental treatments selected on the basis of data observed in the first stage continuing along with the control treatment to the second and any subsequent stages. A statistical challenge arising in such a design is ensuring control of the overall type I error rate, and the talk will explain how this can be enabled through an extension of the group-sequential approach. In some settings the primary endpoint can be observed only after long-term follow-up, so that at the time of the first interim analysis primary endpoint data are available for only a relatively small proportion of the patients randomised. In this case, if a suitable short-term endpoint exists, it may be desirable to use data on this endpoint, along with any primary endpoint data available at the first interim analysis, to inform treatment selection. A new method will be presented that allows this whilst maintaining control of the overall type I error rate. Simulation study results will be presented illustrating the gain in power from the use of such an approach.

 

 

 

Autumn Term 2013

 

 

Thursday 24 Oct: Prof Neil Pearce, London School of Hygiene and Tropical Medicine: "The analysis of variance and the analysis of causes"

 

Abstract: The methods, and ways of thinking about the health of populations, that will be required for epidemiology in the 21st century are in some instances quite different from the standard epidemiological techniques that are taught in most textbooks and courses today. As we develop epidemiological methods for addressing the scientific and public health problems of the 21st century, it is important that we consider, once again, the distinction between the analysis of variance and the analysis of causes. In this presentation, I first consider the statistical and scientific issues involved in the distinction between the analysis of variance and the analysis of causes. I then discuss some of the implications for regression modeling. Finally, I discuss some examples of the implications of this distinction for the theory and practice of epidemiology in a changing world, particularly with regards to risk factors that become ubiquitous over time.

 

 

 

Thursday 21 Nov: Dr Gary Collins, University of Oxford, "Issues in the external validation of multivariable prediction models"

 

Abstract: Multivariable prediction models are being published in ever increasing numbers, and some are now included in various clinical guidelines (e.g. NICE CG146, CG67).  They are typically used to estimate the probability that a specific outcome or disease is present (diagnostic setting) or a specific outcome will occur in the future (prognostic setting) in an individual.  Once a prediction model has been developed, it is essential that its performance be evaluated.  External validation studies describing an evaluation of model performance are far more relevant than those reporting its development.  Yet, with notable exceptions (e.g. Framingham risk score), external validation studies are infrequently carried out.

 

In this talk I will review and discuss some of the important issues to consider when validating prediction models.  I will present a review of published external validation studies and evaluate how they are conducted and reported.  I will highlight particular issues and shortcomings using examples from the literature supplemented with results from simulation studies.  During the talk I will describe better approaches to ensure external validation studies can be carried out, enabling readers to critically appraise a model's predictive accuracy.

 

 

 

Thursday 12 Dec: Dr Kelvin Jordan, University of Keele, "Modelling patterns of morbidity and their management over time in primary care"

 

Abstract: Longitudinal studies in health research often use single baseline measurements of potential risk or prognostic factors to predict outcomes. Outcomes of such studies also tend to be measured on a limited number of follow-up points, often only once. Further, if outcomes are measured on more than one occasion, measurements tend to be spaced at long intervals apart. For symptoms such as musculoskeletal pain, which may be episodic and fluctuate in severity over short periods of time, single measurements may not accurately reflect an individual's current symptom status and short-term changes in their symptoms, and may not truly discriminate between patients. This seminar will look at the use of self-reported information collected at short and regular intervals to determine common patterns of morbidity, discuss the use of routinely recorded primary care data as an alternative or as a complement to self-reported data, and in particular detail and give examples of the use of various forms of latent class analysis to derive patterns of morbidity over time.

 

 

Spring Term 2014

Thursday 23 Jan: Dr Stephen Burgess, Cambridge, "Using Mendelian randomization to disentangle the causal effects of lipid fractions"

 

Abstract: A conventional Mendelian randomization analysis assesses the causal effect of a risk factor on an outcome by using genetic variants solely associated with the risk factor of interest as instrumental variables. However, in some cases, such as for triglycerides as a cardiovascular risk factor, it may be difficult to find a relevant genetic variant that is not also associated with related risk factors, such as other lipid fractions. Such a variant is known as pleiotropic. In this talk, we propose an extension to Mendelian randomization to use multiple genetic variants associated with several measured risk factors to simultaneously estimate the causal effects of each of the risk factors on the outcome. We name the approach "factorial Mendelian randomization" by analogy to a factorial randomized trial. Methods for estimating the causal effects are presented and compared using real and simulated data, and the assumptions necessary for a valid factorial Mendelian randomization analysis are discussed. Subject to these assumptions, we demonstrate that triglycerides-related pathways have a causal effect on the risk of coronary heart disease independent of the effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol.
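As background, one common summary-data formulation of this multi-risk-factor extension (not necessarily the exact model presented in the talk; notation ours) assumes that, for genetic variant \(j\), the association with the outcome is a linear combination of its associations with the \(K\) risk factors:

\[ \Gamma_j = \sum_{k=1}^{K}\beta_k\,\gamma_{kj}, \]

so the causal effects \(\beta_1,\dots,\beta_K\) (e.g. of LDL cholesterol, HDL cholesterol and triglycerides) can be estimated by weighted regression, without an intercept, of the estimated variant-outcome associations \(\hat{\Gamma}_j\) on the estimated variant-risk-factor associations \(\hat{\gamma}_{kj}\).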

 

 

 

Thursday 13 Feb: Dr Nick Latimer, University of Sheffield, "A guide to adjusting survival time estimates to account for treatment switching in randomised controlled trials".

 

Abstract: OBJECTIVES: Treatment switching is a common issue in clinical trials of cancer treatments – often patients randomised to the control group are permitted to switch onto the experimental treatment at some point during follow-up. In such circumstances an intention-to-treat (ITT) analysis will result in biased estimates of the overall survival advantage – and therefore the cost-effectiveness – associated with the experimental treatment. Methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTA). We present an analytical framework to guide analysts on the use of methods to adjust for treatment switching in the context of economic evaluations.
METHODS: We conducted a review of methods used to adjust for treatment switching in HTA, and two rigorous simulation studies to assess the performance of adjustment methods in a range of realistic scenarios. We tested different simulated trial sample sizes, crossover proportions, treatment effect sizes, levels of administrative censoring, and data-generating models. Combining the findings from our review and our simulation studies, we made practical recommendations on the use of adjustment methods in HTA.
RESULTS: Our review demonstrates that adjustment methods make important limiting assumptions. Our simulation studies show that the bias associated with alternative methods is highly associated with deviations from their assumptions. Our recommended analysis framework aims to help researchers find suitable adjustment methods on a case-by-case basis. The characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods.
CONCLUSIONS: The limitations associated with switching adjustment methods mean that different methods are appropriate in different scenarios. In some scenarios all methods may be prone to important bias. The data requirements of adjustment methods have important implications for people who design and analyse trials which allow treatment switching.

 

 

Thursday 27 March: Dr Nic Timpson, University of Bristol, "Genetics in large scale population based epidemiology: just another omic?"

 

Abstract: Nic's work uses large population-based collections of phenotypic and genotypic data to undertake association analyses and applied epidemiological analyses that exploit the properties of genetic data to help causal inference. In this seminar he'll go over some of the applications and designs for applied methods, but will also introduce an ongoing project which has collected whole genome sequence data in samples from the UK (UK10K). He'll use the UK10K project and efforts linked to it to illustrate the potential gains from probing denser collections of genetic data in the immediate future.

 

 

Summer Term 2014

Thursday 22 May: Dr Eleftheria Zeggini, Sanger Institute, "Next generation association studies for complex traits"

 

venue: Adrian LG 26

 

Abstract: The molecular changes leading to the pathogenesis of common diseases are still poorly understood. Here I will present work on population-scale whole genome sequencing in deeply-phenotyped cohorts to dissect the genetic architecture of medically-relevant traits. In the area of complex trait genomics, technology is in danger of outstripping our capacity to analyse and interpret the data. The field of statistical genetics has been actively developing tests tailored to the joint analysis of multiple rare variants. In addition, the study of these variants can be empowered by focusing on isolated populations, in which they may have increased in frequency and linkage disequilibrium tends to be extended. Substantial effort has also been invested in leveraging the characteristics of sub-Saharan African populations to identify and fine-map causal variants.

 

Thursday 19 Jun: Dr Neil Hawkins, London School of Hygiene and Tropical Medicine, "The Many Faces of Multiplicity in HTA"

venue: Adrian LG26

Abstract: I will consider the many areas of Health Technology Assessment where the problem of multiplicity arises. Multiplicity arises when we have a number of candidate statistical analyses and we use the same data set for model selection and model estimation. This can lead to both biased estimates and an under-estimate of the uncertainty in these biased estimates. Multiplicity can occur when we select relevant patient subgroups for analysis, structural forms and covariate sets for statistical models, and data sets for evidence synthesis. I will review traditional solutions such as a requirement for pre-specification, traditional methods for correcting inference (such as the Bonferroni correction), a requirement for biological plausibility, or simply regarding certain analyses as 'exploratory'. I will then look at the potential role of wider evidence synthesis and Bayesian methods in addressing some of the limitations of traditional solutions. Importantly, Bayesian methods can potentially take account of both the decision process underlying the selection of candidate analyses and the mechanism by which the final analysis (for the purposes of decision-making) is selected.
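For reference, the Bonferroni correction mentioned above controls the family-wise error rate at level \(\alpha\) across \(m\) candidate analyses simply by testing each analysis at level

\[ \alpha^{*} = \frac{\alpha}{m}, \]

which guarantees \(\Pr(\text{at least one false positive}) \le \alpha\) regardless of the dependence between the tests, but becomes very conservative as the number of candidate subgroups, model forms and covariate sets grows.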

 

 

 

Autumn Term 2014

Thursday 9 Oct: Dr Pedro Saramago Goncalves, University of York, "Synthesising individual-level data for decision making in healthcare: challenges from a set of case studies"

 

venue: Bennett Link Lower Ground Floor Lecture Theatre

 

Abstract: It is desirable to use all relevant sources of evidence to inform decision model parameters for use in cost-effectiveness analysis (CEA). The gold standard is to synthesise evidence from multiple individual patient level (IPD) sources. Network meta-analysis (NMA) has often been used in CEA as a tool for the simultaneous synthesis of information on more than two interventions. Until recently, however, NMA methods for IPD were scarce or non-existent.
With the help of a collection of case studies this talk aims to present and discuss recent methodological contributions to the synthesis of IPD for CEA. Common across examples is the argument that it is better not to exclude any evidence from the analysis, irrespective of the format in which these data are available. The idea that IPD should always be used (even if only for a proportion of the evidence base) is reinforced, given that it improves parameter estimation of NMA models, particularly when adjusting for differences in patient-level covariates across comparisons and/or for baseline imbalances.
A Public Health case study is used to show that the use of IPD allows for a more suitable characterisation of decision uncertainty when estimating subgroup CEA, appropriately allowing also for subgroup value of information analysis. A second case study looks at the effectiveness of high compression treatments on the healing of venous leg ulcers. In this example it is shown that the use of IPD for time to event outcomes is particularly useful in guiding HTA decision making by allowing flexibility in the specification of more appropriate survival distributions and in dealing with potential existing study heterogeneity. Finally, a third example on the use of acupuncture for the treatment of chronic pain shows how IPD facilitated the synthesis of continuous outcomes when different outcome measures were reported across studies.

 

 

 

Thursday 30 Oct: Prof Lucinda Billingham, University of Birmingham, "Use of a Bayesian adaptive design for the multi-drug, genetic-marker-directed, non-comparative, multi-centre, multi-arm phase II National Lung Matrix Trial"

 

venue: Princess Road West (Health Sciences), G20

 

Abstract: Stratified medicine aims to tailor treatment decisions to individual patients, typically using molecular information to predict treatment benefit. The potential impact to benefit patients is considerable and recognised as strategically important. Cancer Research UK has made major investments into their Stratified Medicine Programme which provides a significant step in making targeted therapies available for people with cancer in the UK, with the National Lung Matrix Trial forming the next major phase in the agenda. The trial consists of a series of parallel, multi-centre, single-arm phase II trials, each arm testing an experimental targeted drug in a population stratified by multiple pre-specified target biomarkers. The trial uses a Bayesian adaptive design with the aim of determining whether there is sufficient signal of activity in any drug-biomarker combination to warrant further investigation.

 

Thursday 27 Nov: Tim Morris, MRC Clinical Trials Unit, UCL, "Re-randomisation of previous participants to clinical trials"

 

venue: Bennett Link Lower Ground Floor Lecture Theatre

 

Abstract: Patient recruitment is a major challenge to clinical trials. Reviews of publicly funded trials in the UK have shown that just 49–65% recruit to target, and of these many fail to do so on time. As a result it takes longer than expected to answer important questions, at greater expense, and answers may be less precise than was planned. In many disease areas, patients present with symptoms several times. A standard parallel-group design would regard such patients as ineligible to re-enrol, with adverse consequences for recruitment; a crossover trial would require that they agree up-front to a pre-defined number of follow-up periods. I propose designing trials that allow re-randomisation of previous participants but, unlike crossover trials, participants do not have to agree to a certain number of randomisations; they re-enrol when they need to. Initial objections to this are common, and centre on the implications for the analysis of some patients providing multiple observations to the data. I will show that if randomisation is done sensibly these objections are unfounded: the dependence of observations need not cause problems for the analysis. I will explain how and why valid inferences are obtained in terms of bias and coverage, and give the conditions under which n observations including re-randomisations give equivalent precision to n single randomisations. Simulation results confirm that, under departures from these conditions, re-randomisation has negligible losses in power. I will end by outlining some attractions of re-randomisation beyond patient recruitment, give some examples of trials already using the design, and offer a caution about when it is best avoided.

 

 

Spring Term 2015

Thursday 26 February: Dr Dan Jackson, MRC Biostatistics Unit Cambridge, "New models and estimation methods for network meta-analysis"

 

venue: Adrian LG26

 

Abstract: Network meta-analysis is becoming more popular as a way to analyse multiple treatments simultaneously and, in the right circumstances, rank treatments. In this talk I will present the design-by-treatment interaction model and I will explain why this provides an appropriate framework for network meta-analysis. I will also present my new model, which is a special case of the design-by-treatment interaction model, and I will explain why this model is suitable for the routine analysis of network meta-analysis data. I will present a variety of methods for fitting this model, including Bayesian methods (both MCMC and importance sampling), REML, and the DerSimonian and Laird method of moments. I will present statistics that quantify the impact of the two variance components (heterogeneity and inconsistency) that comprise my model, and I will discuss the use of study weights and borrowing-of-strength statistics. I will present the results for an example.
Parts of the various aspects of the work that I will present are in collaboration with Martin Law, Ian White, Julian Higgins, Georgia Salanti, Richard Riley, Malcolm Price, John Copas, Stephen Rice, Jessica Barrett, Rebecca Turner & Kirsty Rhodes.

 


 

Thursday 12 March: Dean Langan, University of York, "Handling heterogeneity in meta-analysis"

 

venue: Adrian LG26

 

Abstract: In meta-analyses, effect estimates from different studies usually vary above and beyond what would be expected by chance alone, due to inherent variation in the design and conduct of the studies. This type of variance is known as heterogeneity and is most commonly estimated using an approach described by DerSimonian & Laird (1986). Alternative methods to estimate the heterogeneity variance include proposals from Paule & Mandel, Hartung & Makambi, and Sidik & Jonkman, as well as an estimator derived from the restricted maximum likelihood approach. The aims of the presented research are to compare all methods and make clear recommendations for handling heterogeneity in a wide range of meta-analyses. First, this presentation compares the impact of using these methods on the results of 12,894 meta-analyses extracted from the Cochrane Database of Systematic Reviews. Results suggest I² estimates can differ by more than 50% when different heterogeneity estimators are used. Second, the performance of heterogeneity variance estimators is compared using simulated meta-analysis data. Findings imply that using a single estimate of heterogeneity may lead to non-robust results in some meta-analyses, and researchers should consider using alternatives to the DerSimonian and Laird method.
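As a point of reference for the estimators compared above, the DerSimonian & Laird (1986) moment estimator of the between-study variance \(\tau^2\), and the derived \(I^2\) statistic, can be written as (standard notation, not taken from the abstract)

\[ Q = \sum_{i=1}^{k} w_i\bigl(\hat{\theta}_i - \hat{\theta}_{FE}\bigr)^2, \qquad \hat{\tau}^2_{DL} = \max\!\left\{0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right\}, \qquad I^2 = \max\!\left\{0,\; \frac{Q - (k-1)}{Q}\right\}, \]

where \(\hat{\theta}_i\) are the study effect estimates, \(w_i = 1/\hat{\sigma}_i^2\) their inverse-variance weights, and \(\hat{\theta}_{FE} = \sum_i w_i\hat{\theta}_i / \sum_i w_i\). The alternative estimators differ mainly in how the weights and the moment (or likelihood) equations are specified.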

 

 

Thursday 26 March: Robert Grant, Kingston University & St George’s, University of London, "Can interactive online graphics help us communicate uncertainty?"

 

venue: Adrian LG26

 

Abstract: The last few years have seen an explosion of innovative data visualisations, particularly those that are interactive and delivered online. These have the potential to make our work have much greater impact but are a mystery to most statisticians. I will describe how I learned about these: how to design them and how to make them, in particular translating outputs to the D3 JavaScript library. No web design knowledge is required! I will reflect on the differences between the worlds of data and design, present some current experiments in representing uncertainty in more intuitive ways for a non-statistical audience, and propose priorities for future empirical research.

 

Summer Term 2015

 

Thursday 4 June, 1-2pm: Dr Jonathan Bartlett, London School of Hygiene and Tropical Medicine, "Multiple imputation of covariates in competing risks analysis"

venue: Bennett Lower Ground Floor Lecture Theatre 3

Abstract: Missing values are a common issue when analysing competing risks outcomes. We develop a multiple imputation approach for imputing missing covariates, based on the user specifying Cox proportional hazards models for each cause-specific hazard function. The approach can accommodate missingness in multiple covariates, and takes into account interactions or non-linear covariate effects, if present in the cause-specific hazard models. Results based on a simulation study will be presented, along with results based on a dataset of cause-specific mortality in Norway. Software in Stata and R implementing the approach will also be described.
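A minimal sketch of the modelling framework underlying the imputation approach (generic notation, not taken from the abstract): with \(K\) competing causes of failure, a Cox proportional hazards model is specified for each cause-specific hazard,

\[ \lambda_k(t \mid \mathbf{x}) = \lambda_{0k}(t)\exp\bigl(\boldsymbol{\beta}_k^{\mathsf{T}}\mathbf{x}\bigr), \qquad k = 1,\dots,K, \]

and missing elements of the covariate vector \(\mathbf{x}\) are then imputed from a distribution compatible with all \(K\) cause-specific models (including any interactions or non-linear covariate terms), rather than from an unrelated imputation model.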

Thursday 25 June, 1-2pm: Prof Richard Riley, University of Keele, "Multivariate meta-analysis: advantages, disadvantages & applications"

venue: Adrian LG26

Abstract: Meta-analysis is the statistical synthesis of results from related primary studies, in order to quantify a particular effect of interest. It is an immensely popular tool in evidence-based medicine, and is used to summarise treatment effects, to identify risk/prognostic factors, and to examine the accuracy of a diagnostic test or prediction model, amongst many other applications. Many primary studies have more than one outcome of interest, such as the treatment effect on both disease-free survival and overall survival, and researchers usually meta-analyse each outcome separately. However, such multiple outcomes are often correlated. For example, a patient's time to recurrence of disease is generally associated with their time to death. By meta-analysing each outcome independently, researchers ignore this correlation and thus lose potentially valuable information. As well as multiple outcomes, other correlated measures may also be of interest to the meta-analyst, such as multiple treatment effects (e.g. A vs B, and A vs C) and multiple performance measures (e.g. sensitivity and specificity, calibration and discrimination, etc.). In this talk, I describe how multivariate meta-analysis models can jointly analyse multiple effects and account for their correlation. I discuss the statistical advantages and disadvantages of the approach over standard univariate methods (which analyse each outcome separately). In particular, I illustrate the gain in precision and borrowing of strength that the correlation can bring, which makes more of the data we already have and reduces issues such as outcome reporting bias. Though the talk will detail statistical issues, it is intended for a broad audience and especially for those interested in systematic reviews and meta-analysis. Real examples (mostly from the medical field) will be used to illustrate the potential application of multivariate meta-analysis, including multiple outcomes, multiple treatments (network meta-analysis), test accuracy, and risk prediction.
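As a brief illustration, a bivariate random-effects meta-analysis of two correlated outcomes (notation ours, not from the abstract) models the pair of estimates from study \(i\) as

\[ \begin{pmatrix}\hat{\theta}_{i1}\\ \hat{\theta}_{i2}\end{pmatrix} \sim N\!\left(\begin{pmatrix}\theta_{i1}\\ \theta_{i2}\end{pmatrix},\, S_i\right), \qquad \begin{pmatrix}\theta_{i1}\\ \theta_{i2}\end{pmatrix} \sim N\!\left(\begin{pmatrix}\mu_1\\ \mu_2\end{pmatrix},\, \begin{pmatrix}\tau_1^2 & \rho\tau_1\tau_2\\ \rho\tau_1\tau_2 & \tau_2^2\end{pmatrix}\right), \]

where \(S_i\) holds the within-study variances and covariance, and \(\rho\) is the between-study correlation; it is this correlation that allows studies reporting only one outcome to contribute information (borrow strength) for the other.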

 

Autumn Term 2015

Thursday 24 Sep: Dr Karla Hemming, University of Birmingham, "The stepped wedge cluster randomised trial: rationale, design, analysis and reporting"

venue: Bennett Ground Floor Geology Dept Lecture Theatre 10

Abstract: The stepped-wedge cluster randomised trial (SW-CRT) is a novel research study design that is increasingly being used in the evaluation of service-delivery-type interventions. The SW-CRT is a type of randomised trial in which clusters are randomised to a date at which they initiate the intervention under evaluation. The design involves random and sequential crossover of clusters from control to intervention, until all clusters are exposed. It is a pragmatic study design which can reconcile the need for robust evaluations with political or logistical constraints. Whilst not exclusively for the evaluation of service delivery interventions, it is particularly suited to evaluations that do not rely on individual patient recruitment. The SW-CRT design has the potential to be more powerful than the simple parallel cluster trial; in this talk, however, we show that, contrary to popular opinion, the efficiency depends on the intra-cluster correlation (ICC), i.e. the correlation between any two observations within the same cluster. Unless the clusters are very large, any increase in the ICC will tend to have a detrimental effect on the precision of the SW study. However, whereas the power available in a conventional cluster trial plateaus with increasing cluster size, this is not the case in the SW-CRT. In a SW-CRT more clusters are exposed to the intervention towards the end of the study than in its early stages. This implies that the effect of the treatment will be confounded with any underlying temporal trend. We show how sample size calculations and analysis must make allowance for both the clustered nature of the design and the confounding effect of time. Finally, whilst an extension to the CONSORT statement does not yet exist for the SW-CRT, we outline some recommended additional reporting items. (Collaborators: A Girling and R Lilford.)
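To make the cluster-size point above concrete, the usual simple approximation for a conventional parallel cluster trial (equal cluster sizes, cross-sectional sampling; notation ours) gives the design effect and effective sample size as

\[ DE = 1 + (m - 1)\rho, \qquad n_{\text{effective}} = \frac{Nm}{1 + (m-1)\rho}, \]

where \(N\) is the number of clusters, \(m\) the cluster size and \(\rho\) the ICC, so that \(n_{\text{effective}}\) approaches \(N/\rho\) as \(m\) grows, whatever the total number of individuals. The SW-CRT can avoid this plateau (broadly, because each cluster also contributes within-cluster, before-and-after comparisons), but, as the abstract notes, its efficiency still depends strongly on \(\rho\) and on correctly modelling the underlying time trend.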

 

Thursday 29 Oct: Dr Oriana Ciani, Università Commerciale L. Bocconi and University of Exeter, "Surrogate endpoints in health policy: friends or foes?"

venue: Henry Wellcome Ground Floor Frank and Katherine May Lecture Theatre

Abstract: The efficacy of medicines, medical devices, and other health technologies should be proved in trials that assess final patient-relevant outcomes such as survival or morbidity. However, market access and coverage decisions are often based on surrogate endpoints, biomarkers, or intermediate endpoints, which aim to substitute and predict patient-relevant outcomes that are unavailable due to methodological, financial, or practical constraints. This talk will provide a summary of the current use of surrogate endpoints in healthcare policy, discussing the case for and against their adoption and reviewing validation methods. A three-step framework for policy makers to handle surrogates will be introduced. It involves establishing the level of evidence, assessing the strength of the association, and quantifying relations between surrogates and final outcomes. Although use of surrogates can be problematic, they can, when selected appropriately, offer important opportunities for more efficient clinical trials and faster access to new health technologies that benefit patients and healthcare systems.

 

Friday 27 Nov: Dr Cosetta Minelli, Imperial College, "Bayesian prediction model combining data from studies on different sets of predictors"

venue: Adrian Lower Ground Floor Lecture Theatre LG26

Abstract: When developing risk prediction models using data from international consortia, studies with no information on one or more of the variables to be investigated are typically excluded from the analysis, thus reducing the sample size available. Our motivating example is a large EU-funded project aimed at developing a risk prediction model for COPD. The project includes a number of population-based studies, some of which provide data on only a subset of the lifestyle, environmental and clinical potential predictors to be tested in the prediction model. Bayesian methods have been proposed that allow inclusion of studies with missing variables through imputation based on correlated variables present in both the complete and the incomplete studies. These use either a two-stage approach or a single joint model for imputing data and fitting the regression model to the imputed data. In this talk I will discuss these approaches, ways to evaluate their performance, and the feasibility of application in practice.

 

 

Spring Term 2016

Thursday 21 Jan: Dr Jack Bowden, MRC Biostatistics Unit Cambridge, "Weighing evidence 'steam punk' style with the meta-analyzer"

venue: Adrian Lower Ground Floor Lecture Theatre LG26

 

Abstract: The funnel plot is a graphical visualisation of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modelling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the centre of mass of a physical system. We used this analogy, with some success, to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulae. In this talk I will describe the Science festival work and then attempt to formalise the physical analogy at a more technical level using the estimating equation framework: firstly, to help elucidate some of the basic statistical models employed in a meta-analysis and secondly, to forge new connections between bias adjustment in evidence synthesis and Mendelian randomization.
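The centre-of-mass analogy can be stated directly (standard fixed-effect notation, not taken from the abstract): the pooled estimate is the weighted average

\[ \hat{\theta} = \frac{\sum_i w_i\,\hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\sigma_i^2}, \]

which is exactly the balance point obtained by placing a mass \(w_i\) at position \(\hat{\theta}_i\) on the horizontal axis of the funnel plot; small-study or publication bias shifts this balance point in the same way that removing mass from one side of a beam shifts its pivot.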

 

Thursday 17 Mar: Dr Danielle Burke, University of Keele, "Univariate and multivariate Bayesian meta-analysis with consideration of future trials and the choice of prior distributions".

venue:  George Porter Upper Ground Floor Lecture Theatre C

Abstract: In this talk I will discuss some of my recent research on Bayesian meta-analysis and the choice of prior distributions for between-study variance-covariance parameters. In the first part, I will consider how we can use a meta-analysis of results from multiple phase II trials to make decisions about proceeding to phase III, in both univariate and multivariate meta-analysis settings. In the second part, I discuss the choice of prior distributions in multivariate random-effects meta-analysis. In Bayesian meta-analyses of one outcome, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis there is little guidance about the choice of prior distribution for the additional between-study correlation, ρB; researchers often use a Uniform(-1,1) distribution assuming this is vague. I present the results of a simulation study and real examples to examine the use of various prior distributions for ρB within a Bayesian normal random-effects meta-analysis of two correlated outcomes. I show that the routine use of a Uniform(-1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague.
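For orientation, the prior in question applies to the off-diagonal element of the between-study covariance matrix; one illustrative specification (not necessarily the speaker's exact model) is

\[ \Sigma_B = \begin{pmatrix}\tau_1^2 & \rho_B\tau_1\tau_2\\ \rho_B\tau_1\tau_2 & \tau_2^2\end{pmatrix}, \qquad \rho_B \sim \text{Uniform}(-1, 1), \]

with separate priors on the between-study standard deviations \(\tau_1\) and \(\tau_2\); the abstract's point is that the Uniform(-1,1) prior on \(\rho_B\) is not necessarily as vague as it appears.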

 

 

Summer Term 2016

Thursday 12 May: Dr Matthew Williams, Department of Surgery & Cancer, Imperial College London, and Prof Anthony Hunter, Department of Computer Science, University College London, "Aggregating evidence about the positive and negative effects of treatments using a computational model of argument".

venue: Centre for Medicine, room 0.25/0.26

Abstract: Computational models of argument are being developed to capture aspects of how we can handle incomplete and inconsistent information through the use of argumentation. In this talk, we describe a novel approach to aggregating clinical evidence using a computational model of argument [1,2]. The framework is a formal approach to synthesizing knowledge from clinical trials involving multiple outcome indicators. Based on the available evidence, arguments are generated for claiming that one treatment is superior, or equivalent, to another. Evidence comes from randomized clinical trials, systematic reviews, meta-analyses, network analyses, etc. Preference criteria over arguments are used that are based on the outcome indicators, and the magnitude of those outcome indicators, in the evidence. Meta-arguments attack (i.e. they are counterarguments to) arguments that are based on weaker evidence. An evaluation criterion is used to determine which are the winning arguments, and thereby the recommendations for which treatments are superior. We have compared our approach with recommendations made in NICE Guidelines, and we have used our approach to publish a more refined systematic review of evidence presented in a Cochrane Review [3]. Our approach has an advantage over meta-analyses and network analyses in that they aggregate evidence according to a single outcome indicator, whereas our approach combines evidence according to multiple outcome indicators.

[1] A Hunter and M Williams (2012) Aggregating evidence about the positive and negative effects of treatments, Artificial Intelligence in Medicine, 56:173-190
[2] A Hunter and M Williams (2015) Aggregation of Clinical Evidence using Argumentation: A Tutorial Introduction, Foundations of Biomedical Knowledge Representation, edited by Arjen Hommersom and Peter Lucas, LNCS volume 9521, pages 317-337, Springer
[3] M Williams, Z Liu, A Hunter and F MacBeth (2015) An updated systematic review of lung chemo-radiotherapy using a new evidence aggregation method, Lung Cancer 87(3):290-5

 

Thursday 16 Jun, 12:45 - 13:45: Dr Rhian Daniel, London School of Hygiene and Tropical Medicine, "Recent advances in causal mediation analysis"

venue: Centre for Medicine, room 0.25/0.26

Abstract: In diverse fields of empirical research, including many in the biological sciences, attempts are made to decompose the effect of an exposure on an outcome into its effects via different pathways. For example, it is well-established that breast cancer survival rates in the UK differ by socio-economic status. But how much of this effect is due to differential adherence to screening programmes? How much is explained by treatment choices? And so on. These enquiries, traditionally tackled using simple regression methods, have been given much recent attention in the causal inference literature, specifically in the fruitful area known as Causal Mediation Analysis. The focus has mainly been on so-called natural direct and indirect effects, with flexible estimation methods that allow their estimation in the presence of non-linearities and interactions, and careful consideration given to the need for controlling confounding. Despite these many developments, the estimation of natural direct and indirect effects is still plagued by one major limitation, namely its reliance on an assumption known as the "cross-world" assumption, an assumption so strong that no experiment could even hypothetically be designed under which its validity would be guaranteed. Moreover, the assumption is known to be violated when confounders of the mediator-outcome association are affected by the exposure, and thus in particular in settings that involve repeatedly measured mediators, or multiple correlated mediators. In this talk, I will discuss alternative mediation effects known as interventional direct and indirect effects (VanderWeele et al, Epidemiology, 2014), and a novel extension to the multiple mediator setting. This is joint work with Stijn Vansteelandt, University of Gent. We argue that interventional direct and indirect effects are policy-relevant and show that they can be identified under much weaker conditions than natural direct and indirect effects. In particular, they can be used to capture the path-specific effects of an exposure on an outcome that are mediated by distinct mediators, even when, as often, the structural dependence between the multiple mediators is unknown. The approach will be illustrated using data on breast cancer survival.
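For orientation, the natural effects referred to above decompose the total effect of a binary exposure \(A\) on outcome \(Y\) with mediator \(M\) as (standard counterfactual notation, on the mean-difference scale)

\[ \underbrace{E\{Y(1, M(0))\} - E\{Y(0, M(0))\}}_{\text{natural direct effect}} \;+\; \underbrace{E\{Y(1, M(1))\} - E\{Y(1, M(0))\}}_{\text{natural indirect effect}} \;=\; E\{Y(1)\} - E\{Y(0)\}, \]

where \(Y(a, M(a'))\) is the outcome under exposure \(a\) with the mediator set to the value it would take under exposure \(a'\); the "cross-world" quantity \(Y(1, M(0))\) is what makes these effects hard to identify. Interventional effects instead set the mediator to a random draw from its distribution under exposure \(a'\) (given confounders), which is what weakens the required identification conditions.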

Thursday 23 Jun: Dr Ian White, MRC Biostatistics Unit Cambridge, "Meta-analysis of non-linear dose-response relationships".

venue: Centre for Medicine, room 0.25/0.26

Abstract: Non-linear dose-response relationships, such as that between body mass index (BMI) and mortality, are common. Such relationships are best explored using continuous functions estimated from individual participant data from multiple studies. However, it is not obvious how the estimated dose-response relationships should be combined across studies. Sauerbrei and Royston proposed using multiple univariate meta-analyses of the log hazard ratio comparing an exposure level with a reference level. I have proposed instead combining the model parameters across studies using a multivariate meta-analysis. I will describe both approaches and illustrate them using data from the Emerging Risk Factors Collaboration on BMI, coronary heart disease (CHD) events and all-cause mortality (ACM) (> 80 cohorts, >18000 events). I will also show how ideas of borrowing of strength can be used to understand the multivariate method.
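A compact sketch of the multivariate approach described above (notation ours, not the speaker's): in each study \(s\), the log hazard ratio at exposure level \(x\), relative to a reference level \(x_0\), is modelled with a small set of basis functions (e.g. fractional polynomial or spline terms),

\[ g_s(x) = \beta_{1s}\{f_1(x) - f_1(x_0)\} + \beta_{2s}\{f_2(x) - f_2(x_0)\}, \]

and the estimated coefficient pairs \((\hat{\beta}_{1s}, \hat{\beta}_{2s})\), together with their within-study covariance matrices, are combined across studies in a bivariate random-effects meta-analysis; the pooled curve is then reconstructed from the pooled coefficients. The univariate alternative instead meta-analyses the log hazard ratio separately at each of a grid of exposure levels.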

 

Autumn Term 2016

 

Thursday 10 November: Prof Tomasz Burzykowski, University of Hasselt (Belgium)

Evaluation of surrogate endpoints: current practice and research topics

Abstract: A surrogate endpoint is intended to replace a clinical endpoint for the evaluation of new treatments when it can be measured more cheaply, more conveniently, more frequently, or earlier than that clinical endpoint. Before a surrogate can be used to replace a clinical endpoint in drug development, its validity for this purpose has to be evaluated. The process of arriving at the definition of a “valid surrogate” has taken many years. In the presentation, various proposals for the definition will be reviewed. Related evaluation methods will be discussed and illustrated using real-life data from oncology. Current research topics will also be briefly reviewed.

venue: Centre for Medicine, room 0.25-0.26, University Road

 

Thursday 17 November: Dr Andrew Titman, University of Lancaster

Accounting for informative observation in event history analysis

Many observational studies into disease processes do not continually monitor patient status, but instead only observe patients at clinic examination times. Standard analyses for such data usually assume that the examination times are non-informative about the disease process, meaning they are ignorable. However, if clinic visits are potentially self-initiated and the disease is symptomatic, there is a risk that attendances will arise because a patient's condition has deteriorated, leading to a form of selection bias.

The talk will give an overview of existing approaches for dealing with informative observation times and the challenges to be overcome. There are analogies and connections with methods for missing data, but the methodology is less developed and informative observation is more often an issue of excess, rather than missing, observations. Particular focus will be given to methods for multi-state disease models. In particular, methods for diagnosing the presence of informative observation and joint modelling approaches when there may be a mixture of scheduled and patient-initiated observations will be presented.

venue: Centre for Medicine, room 0.25-0.26, University Road

 

Thursday 24 November: John Whittaker, GSK

Genomics in drug discovery

I will discuss the use of genomics in drug discovery at GSK, and particularly in the selection and validation of drug targets. I’ll highlight the interplay between experimental science and informatics, and will discuss recently published work highlighting the value of genetic information in selecting drug targets/indications. Motivated by this, I will also include a sketch of work ongoing and planned at Open Targets  (https://www.targetvalidation.org/), including both experimental and informatics aspects. I will focus on the key concepts and avoid technical detail, both regarding genomics and data science.

venue: Centre for Medicine, room 0.25, University Road
