Modeling Heterogeneity in Bayesian Meta-Analysis
Al Amer, Fahad (author)
Lin, Lifeng (professor directing dissertation)
Liu, Xiuwen, 1966- (university representative)
Zhang, Xin (committee member)
Bradley, Jonathan R. (committee member)
Florida State University (degree granting institution)
College of Arts and Sciences (degree granting college)
Department of Statistics (degree granting department)
2021
text
doctoral thesis
Meta-analysis has been frequently used to combine findings from independent studies in many areas. Bayesian methods are an important set of tools for performing meta-analyses. They avoid some potentially unrealistic assumptions required by conventional frequentist methods. More importantly, meta-analysts can incorporate prior information from many sources, including experts' opinions and prior meta-analyses. Nevertheless, Bayesian methods are used less frequently than conventional frequentist methods, primarily because they require nontrivial statistical coding, whereas frequentist approaches can be implemented via many user-friendly software packages. This thesis is divided into three parts. The first part aims to provide a practical review of implementations for Bayesian meta-analyses with various prior distributions. We present Bayesian methods for meta-analyses with a focus on a univariate meta-analysis of the odds ratio for binary outcomes. We summarize commonly used choices of prior distribution for the between-study heterogeneity variance, a critical parameter in meta-analyses; they include the inverse-gamma, uniform, and half-normal distributions, as well as evidence-based informative log-normal priors. Five real-world examples are presented to illustrate their performance. Under certain circumstances, Bayesian methods can produce markedly different results from those of frequentist methods, including a change in the decision on statistical significance. When data information is limited, the choice of priors may have a large impact on meta-analytic results, in which case sensitivity analyses are recommended. Moreover, the algorithm for implementing Bayesian analyses may not converge for extremely sparse data, and caution is needed in interpreting the corresponding results. As such, convergence should be routinely examined.
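As an illustration of the kind of implementation such a review covers, a minimal random-walk Metropolis sampler for the normal-normal random-effects model with a half-normal prior on the heterogeneity standard deviation might look as follows. This is a sketch, not the thesis code: the study data, the N(0, 10^2) prior on the summary effect, and the half-normal scale are all hypothetical choices.

```python
import numpy as np

# Hypothetical data: log odds ratios and standard errors from 5 studies
# (illustrative numbers, not the thesis examples).
y = np.array([0.2, -0.1, 0.4, 0.3, 0.0])
s = np.array([0.25, 0.30, 0.20, 0.35, 0.28])

def log_post(mu, tau, y, s, prior_sd=1.0):
    """Log posterior (up to a constant) for the normal-normal random-effects
    model: y_i ~ N(mu, s_i^2 + tau^2), mu ~ N(0, 10^2), tau ~ half-normal."""
    if tau < 0:
        return -np.inf
    v = s**2 + tau**2
    loglik = -0.5 * np.sum(np.log(v) + (y - mu)**2 / v)
    logprior = -0.5 * (mu / 10.0)**2 - 0.5 * (tau / prior_sd)**2
    return loglik + logprior

def metropolis(y, s, n_iter=20000, seed=1):
    """Joint random-walk Metropolis over (mu, tau); negative tau proposals
    are rejected via the -inf log posterior."""
    rng = np.random.default_rng(seed)
    mu, tau = 0.0, 0.5
    lp = log_post(mu, tau, y, s)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        mu_p = mu + rng.normal(0.0, 0.2)
        tau_p = tau + rng.normal(0.0, 0.2)
        lp_p = log_post(mu_p, tau_p, y, s)
        if np.log(rng.uniform()) < lp_p - lp:
            mu, tau, lp = mu_p, tau_p, lp_p
        draws[i] = mu, tau
    return draws[n_iter // 2:]  # discard the first half as burn-in

draws = metropolis(y, s)
mu_mean, tau_mean = draws.mean(axis=0)
```

In practice such models are fit with general-purpose MCMC software, and convergence diagnostics (e.g., trace plots, multiple chains) should be examined as the abstract recommends; this sketch only shows the structure of the model and prior.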
When certain statistical assumptions made by conventional frequentist methods are violated, Bayesian methods provide a reliable alternative for performing a meta-analysis. A critical issue in meta-analysis is that the combined studies may be heterogeneous because of differences in study populations, designs, conduct, and other factors. Thus, heterogeneity can affect the validity of meta-analytic results. In the presence of heterogeneity, a random-effects meta-analysis is considered an appropriate model for estimating a summary effect and a between-study variance. With a reasonable number of studies, conventional methods can estimate the random-effects meta-analysis model, but with few studies the estimate of between-study heterogeneity can be imprecise, and this imprecision is not acknowledged. In this situation, a Bayesian random-effects meta-analysis is advantageous in accounting for all sources of uncertainty. The second part of this thesis uses Bayesian methods to examine the impact of meta-analysis characteristics on between-study heterogeneity by analyzing data from a large collection of meta-analyses with binary outcomes. We classify and examine all included meta-analyses by outcome type and intervention comparison type. Then, based on these classifications, we derive a set of predictive distributions for the extent of between-study variance expected in a future meta-analysis for different effect measures. Researchers may use these findings as informative prior distributions in new meta-analyses of binary outcomes. The last part of this thesis quantifies between-study heterogeneity using prediction intervals. By reporting the prediction interval, we show how often its conclusion contradicts that based on the confidence interval.
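To illustrate how such a derived predictive distribution might serve as an informative prior, suppose the distribution for the between-study variance tau^2 takes a log-normal form; its median and a central 95% range follow directly from the log-scale parameters. The numbers below are placeholders, not the values derived in the thesis.

```python
import math

# Placeholder log-normal parameters for log(tau^2): mean m, standard
# deviation sd. These stand in for a derived predictive distribution;
# they are not the thesis-derived values.
m, sd = -2.5, 1.5

median_tau2 = math.exp(m)             # median of the log-normal
lo_tau2 = math.exp(m - 1.96 * sd)     # 2.5% quantile
hi_tau2 = math.exp(m + 1.96 * sd)     # 97.5% quantile
```

A new meta-analysis with few studies could then place this log-normal prior on tau^2 directly, letting the external evidence stabilize an otherwise imprecise heterogeneity estimate.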
We also show the benefits of reporting the prediction interval in a meta-analysis: it has a straightforward clinical interpretation that may guide researchers in anticipating the true effects in future settings. In addition, we use a large collection of meta-analyses from the Cochrane Database of Systematic Reviews to evaluate the real-world performance of the prediction interval. We assess whether a previous meta-analysis successfully predicts a future study, i.e., a study not included in the meta-analysis based on publication years. This assessment is illustrated using a case study.
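For concreteness, one commonly used form of the 95% prediction interval adds the between-study variance to the variance of the summary estimate and uses a t quantile with k − 2 degrees of freedom. A minimal sketch with hypothetical summary data (not the Cochrane data analyzed in the thesis):

```python
import math

def prediction_interval(y, s2, tau2, t_crit):
    """95% prediction interval for the effect in a new study:
    mu_hat +/- t_crit * sqrt(tau2 + SE(mu_hat)^2), with inverse-variance
    weights 1 / (s_i^2 + tau2) under the random-effects model."""
    w = [1.0 / (v + tau2) for v in s2]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    se2 = 1.0 / sum(w)  # variance of the summary estimate
    half = t_crit * math.sqrt(tau2 + se2)
    return mu - half, mu + half

# Hypothetical data from k = 5 studies (log odds ratios and variances).
y = [0.2, -0.1, 0.4, 0.3, 0.0]
s2 = [0.0625, 0.09, 0.04, 0.1225, 0.0784]
tau2 = 0.05      # assumed between-study variance estimate
t_crit = 3.182   # t quantile at 0.975 with k - 2 = 3 df (from a t-table)
lo, hi = prediction_interval(y, s2, tau2, t_crit)
```

Because the prediction interval incorporates tau^2, it is wider than the confidence interval for the summary effect and can cross the null even when the confidence interval does not, which is exactly the kind of contradiction the thesis quantifies.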
July 2, 2021.
A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Includes bibliographical references.
Lifeng Lin, Professor Directing Dissertation; Xiuwen Liu, University Representative; Xin Zhang, Committee Member; Jonathan Bradley, Committee Member.
Florida State University
2021_Fall_AlAmer_fsu_0071E_16632