Testlets bring several advantages to the development and administration of tests, such as 1) the construction of meaningful test items, 2) the avoidance of exposing non-relevant context, 3) the improvement of testing efficiency, and 4) the progression of testlet items toward higher-order thinking skills. Thus, the inclusion of testlets in educational assessments is prevalent. However, the violation of the local item independence (LII) assumption caused by the inclusion of testlet items is of paramount concern for the accuracy and precision of psychometric procedures, especially because unidimensional IRT models are most commonly used in educational assessments. A Monte Carlo study was conducted to systematically investigate the impact of various approaches to local item dependence (LID) on the accuracy and precision of psychometric procedures. The approaches are the following. First, the two-parameter logistic (2PL) and three-parameter logistic (3PL) models fall under the ignoring approach, in which no action is taken to handle LID. Second, the graded response model (GRM), the generalized partial credit model (GPCM), and the nominal response model (NRM) fall under the aggregation approach, in which testlet items' scores are aggregated to handle LID. Third, the GRM falls under the Response Pattern Coding (RPC) approach, in which testlet items' unique response patterns are coded to handle LID (rpcGRM). Fourth, the bifactor model and the testlet response theory (TRT) model fall under the multidimensional approach, in which a complex model is employed to handle LID. The current study compared the four approaches (i.e., ignoring, aggregation, RPC, and multidimensional) to LID in estimating (a) the item parameters for regular test items; (b) the individuals' latent traits under the EAP and MAP scoring methods; (c) the empirical reliability under the EAP and MAP scoring methods; and (d) the classification agreement and classification accuracy indices under the EAP and MAP scoring methods.
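The aggregation and RPC recodings described above can be sketched as data-preparation steps. The following is a minimal, hypothetical illustration (not from any specific package) of how a three-item dichotomous testlet might be collapsed into a single polytomous "super-item" (aggregation) or into codes for each distinct response pattern (RPC); all variable names are illustrative assumptions.

```python
# Hedged sketch: recoding a 3-item dichotomous testlet for the
# aggregation and RPC approaches. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 responses of 6 examinees to a 3-item testlet
responses = rng.integers(0, 2, size=(6, 3))

# Aggregation approach: sum the testlet item scores into one
# polytomous super-item with categories 0..3 (fit with GRM/GPCM/NRM).
aggregated = responses.sum(axis=1)

# RPC approach: assign each distinct response pattern its own
# category code (the coding step of rpcGRM in the source).
patterns, codes = np.unique(responses, axis=0, return_inverse=True)

print(aggregated)  # one polytomous score per examinee
print(codes)       # one pattern code per examinee
```

Note that aggregation discards which items were answered correctly, while RPC preserves the full pattern at the cost of more categories (up to 2^3 = 8 for three dichotomous items).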
The results of the study showed that the inclusion of testlets greatly impacted the psychometric procedures, and the approaches to handling LID merely mitigated, rather than eliminated, the problems. Additionally, the impact of including testlets grew as the level of local item dependence and the proportion of testlet items increased, and when multiple-choice testlet items were used. Therefore, educational assessment developers should be cautious when including testlets in future test development procedures. The results also showed that the RPC approach performed optimally in recovering both item and individual parameters and in producing accurate empirical reliability and classification accuracy values when testlet items were multiple-choice items. However, when testlet items were non-multiple-choice items, the aggregation and RPC approaches were the optimal choices, followed by the multidimensional and ignoring approaches, in that order.
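For readers unfamiliar with the empirical reliability criterion compared above, it is commonly computed from the scored latent traits as the variance of the estimates divided by that variance plus the mean squared standard error. A minimal sketch under that common definition, with illustrative (made-up) EAP scores and posterior standard deviations:

```python
# Hedged sketch: empirical reliability under EAP scoring, using the
# common formula var(theta_hat) / (var(theta_hat) + mean(se^2)).
# The numbers below are illustrative, not from the study.
import numpy as np

theta_hat = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])  # EAP trait estimates
se = np.array([0.45, 0.40, 0.38, 0.41, 0.47])      # posterior SDs

rel = theta_hat.var() / (theta_hat.var() + np.mean(se**2))
print(round(rel, 3))
```

The same formula applies to MAP scores with their corresponding standard errors; larger standard errors (e.g., from unmodeled LID) push the value down.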