Checking that models adequately represent data is an essential component of applied statistical inference. Psychometricians increasingly use complex models to analyze test takers' responses. The appeal of complex cognitive diagnostic models (CDMs) is undeniable: psychometricians can build and fit models that represent the complex cognitive processes underlying a test while simultaneously controlling for observation error. With the trend toward diagnosing the fine-grained skills responsible for test performance, both new methods and extensions of existing methods for assessing person fit in CDMs are required. The posterior predictive (PP) method is the most commonly used method for evaluating the effectiveness of person-fit statistics in detecting aberrant response patterns in CDMs; it has been shown to be effective in detecting aberrant responses in IRT models, but it is seldom implemented in CDMs. In addition, two less well-known Bayesian model checking methods, the prior predictive posterior simulation (PPPS) method and the pivotal discrepancy measure (PDM) method, are used to investigate the effectiveness of the chosen person-fit statistics. Three person-fit statistics are examined in this study: the log-likelihood statistic (l_z), the unweighted between-set index (UB), and the response conformity index (RCI).

In this study, I investigated the effectiveness of different Bayesian model checking methods in detecting aberrant response patterns with the chosen discrepancy measures. The results may help researchers answer two questions: (1) Which discrepancy measure is more effective in detecting aberrant response patterns under the different model checking methods? (2) How well do the chosen discrepancy measures detect outlying response patterns? A simulation study was conducted to answer these questions. Data generation consisted of two parts: normal response patterns and aberrant response patterns. Normal response patterns were simulated from the DINA model with designated attribute parameters, and each of four aberrant response patterns was simulated from a binomial distribution with a different assigned probability. The data were simulated and analyzed in R with the rjags package.

Several findings can be drawn from my study: (1) increasing the test length did not improve the detection rates for any type of aberrant response pattern; (2) Q-matrix complexity decreased the detection rates only slightly; (3) overall, the log-likelihood statistic was the best measure for detecting each type of aberrant response pattern, especially cheating responses; (4) the discrepancy measures performed similarly under the PP and PPPS methods; and (5) although the RCI was developed in the context of CDMs, it performed poorly in detecting each type of aberrant response.
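The data-generation scheme and the l_z statistic described above can be illustrated with a short sketch. Below is a minimal R example, assuming illustrative values for the number of examinees, test length, slip/guess parameters, and the binomial probability used for the aberrant responses (none of these specific values are reported in the abstract); the Q-matrix here is also generated at random purely for illustration, not taken from the study.

```r
set.seed(1234)

N <- 1000   # examinees
J <- 20     # items
K <- 4      # attributes

## Illustrative Q-matrix: which attributes each item requires
Q <- matrix(rbinom(J * K, 1, 0.5), nrow = J, ncol = K)
Q[rowSums(Q) == 0, 1] <- 1          # every item measures at least one attribute

## Attribute mastery profiles for the examinees
alpha <- matrix(rbinom(N * K, 1, 0.5), nrow = N, ncol = K)

slip  <- rep(0.1, J)                # s_j: slip parameters (assumed values)
guess <- rep(0.2, J)                # g_j: guessing parameters (assumed values)

## DINA ideal response: eta_ij = 1 iff examinee i masters every
## attribute that item j requires
eta <- 1 * (alpha %*% t(Q) == matrix(rowSums(Q), N, J, byrow = TRUE))

## DINA response probabilities: P(X_ij = 1) = (1 - s_j)^eta_ij * g_j^(1 - eta_ij)
p <- t((1 - slip)^t(eta) * guess^t(1 - eta))
X_normal <- matrix(rbinom(N * J, 1, p), nrow = N, ncol = J)

## Aberrant responses: item scores drawn from a binomial with a fixed
## success probability (e.g., 0.25 for random guessing), ignoring eta
X_aberrant <- matrix(rbinom(N * J, 1, 0.25), nrow = N, ncol = J)

## Standardized log-likelihood person-fit statistic l_z, evaluated here at
## the true item probabilities for simplicity; in the Bayesian model checks,
## p would instead come from posterior (or prior/pivotal) draws of the model
lz <- function(x, p) {
  l0   <- rowSums(x * log(p) + (1 - x) * log(1 - p))          # observed log-lik
  e_l0 <- rowSums(p * log(p) + (1 - p) * log(1 - p))          # its expectation
  v_l0 <- rowSums(p * (1 - p) * (log(p / (1 - p)))^2)         # its variance
  (l0 - e_l0) / sqrt(v_l0)
}
lz_normal   <- lz(X_normal, p)
lz_aberrant <- lz(X_aberrant, p)
```

Large negative l_z values flag response vectors that are less likely than expected under the fitted model, which is the basis on which the aberrant patterns are detected in the model checking methods compared here.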
Keywords
Aberrant responses, Bayesian model checking, CDMs, Person fit analysis
Date of Defense
August 27, 2019.
Submitted Note
A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Bibliography Note
Includes bibliographical references.
Advisory Committee
Russell Almond, Professor Directing Dissertation; Fred Huffer, University Representative; Betsy Becker, Committee Member; Insu Paek, Committee Member.
Publisher
Florida State University
Identifier
2019_Fall_Wang_fsu_0071E_15366
Wang, N. (2019). Bayesian Model Checking in Cognitive Diagnostic Models. Retrieved from http://purl.flvc.org/fsu/fd/2019_Fall_Wang_fsu_0071E_15366