Payrovnaziri, S. N. (2021). Explainable Artificial Intelligence for Predictive Modeling of Electronic Health Records in Patients with Cardiovascular Diseases. Retrieved from https://purl.lib.fsu.edu/diginole/2021_Fall_Payrovnaziri_fsu_0071E_16783
Predictive modeling can help researchers tackle challenges such as the detection, prevention, and treatment of diseases with high mortality rates, such as cardiovascular diseases. These diseases involve multiple risk factors that make traditional risk score systems insufficient for accurate and effective outcome prediction, especially in the context of analyzing large, multimodal, heterogeneous data. A significant increase in computational power has enabled researchers to investigate the benefits of applying artificial intelligence to predictive modeling in medicine and healthcare. Nevertheless, in medical practice, predictions made by a complex artificial intelligence-based model have to be validated by a human expert. Thus, the need for explainable artificial intelligence in medicine is increasing. This dissertation accomplishes four research objectives.

First, a systematic scoping review of explainable artificial intelligence models based on real-world electronic health records data is produced. The necessity of explainable artificial intelligence in medicine calls for such a review of the state-of-the-art research in the field. The main goal is to provide insight into current research trends by categorizing machine learning methods, explainable artificial intelligence methods, and machine learning prediction tasks. The reproducibility and interpretability evaluations of the reviewed papers are also discussed. Challenges and opportunities are presented from the perspective of medical professionals, and potential gaps and issues in the field are discussed.

The second objective entails enhancing the performance of predictive models. Whether explainability is intrinsic to the model or achieved through post-hoc methods, the underlying predictive model should be accurate. Long-term and short-term mortality risks in patients with cardiovascular diseases are modeled using machine learning and deep learning methods. These models are built on multimodal electronic health records data: structured data (fixed and longitudinal) and unstructured data in the form of discharge summary notes. The predictive performance of machine learning and deep learning methods across these data types and architectures is compared. It is shown that incorporating a richer set of features enhances the predictive performance of deep models.

The third objective is to investigate the impact of imputation techniques for handling missing values in electronic health records data on the prediction performance and interpretation results of predictive models. Missingness is recognized as a major data quality issue in electronic health records, and its impact on the interpretations of predictive models warrants more investigation. The performance of imputation methods on missing values, as well as their impact on 1) the performance and 2) the interpretations of predictive models, is examined. It is observed that the choice of imputation method affects both the performance and the interpretation results of the predictive models.

The fourth objective focuses on enhancing the interpretability of predictive models based on longitudinal data. Few studies have investigated the application of Shapley additive explanations, a state-of-the-art explainable artificial intelligence method, to longitudinal medical data. Longitudinal laboratory test measurements and vital signs data are modeled using recurrent neural networks. Shapley additive explanations are applied to the model, and global and local interpretations of the model are presented. In addition, a new representation is proposed to enhance the interpretation results of the explainable artificial intelligence method.
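To make the imputation comparison described in the third objective concrete, the following is a minimal, hypothetical sketch using scikit-learn. The dissertation does not specify its implementation, so the synthetic data, the mean and iterative imputers, and the logistic regression model are illustrative assumptions, with coefficient magnitudes standing in as a crude proxy for how interpretations can shift with the imputation method.

```python
# Hypothetical sketch: how different imputation methods can change both
# predictive performance and downstream interpretation of a simple model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for structured EHR features with values missing at random.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.2] = np.nan
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("iterative", IterativeImputer(random_state=0))]:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(imputer.fit_transform(X_tr), y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(imputer.transform(X_te))[:, 1])
    # Coefficients act as a crude proxy for how interpretations shift with imputation.
    print(f"{name:>9}: AUC={auc:.3f}, top |coef| feature={np.abs(clf.coef_).argmax()}")
```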
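Similarly, applying Shapley additive explanations to a recurrent model, as in the fourth objective, could look roughly like the sketch below. The Keras LSTM, the synthetic longitudinal data, and the use of shap.GradientExplainer are assumptions for illustration, not the dissertation's actual architecture or pipeline.

```python
# Hypothetical sketch: SHAP attributions for a recurrent model over longitudinal data.
import numpy as np
import shap
from tensorflow import keras

# Toy longitudinal data: 200 patients, 10 time steps, 5 lab/vital-sign features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10, 5)).astype("float32")
y = (X[:, -1, 0] + 0.5 * X[:, :, 1].mean(axis=1) > 0).astype("float32")

# Simple recurrent model for a binary outcome (e.g., mortality risk).
model = keras.Sequential([
    keras.layers.Input(shape=(10, 5)),
    keras.layers.LSTM(16),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# GradientExplainer handles recurrent layers; a background sample anchors expectations.
explainer = shap.GradientExplainer(model, X[:50])
shap_values = explainer.shap_values(X[:20])  # local attributions per time step and feature

# Collapse the time axis for a simple global view: mean |SHAP| per feature.
sv = shap_values[0] if isinstance(shap_values, list) else shap_values
global_importance = np.abs(sv).mean(axis=(0, 1)).ravel()
for i, score in enumerate(global_importance):
    print(f"feature {i}: mean |SHAP| = {score:.4f}")
```

The per-time-step values returned by the explainer correspond to local, patient-level explanations, while averaging their magnitudes over patients and time, as in the last lines, gives a simple global view of feature importance.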
A Dissertation submitted to the School of Information in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Bibliography Note
Includes bibliographical references.
Advisory Committee
Zhe He, Professor Directing Dissertation; Xiuwen Liu, University Representative; Yolanda Rankin, Committee Member; Gregory Riccardi, Committee Member.
Publisher
Florida State University
Identifier
2021_Fall_Payrovnaziri_fsu_0071E_16783