
Statistical Properties of Estimators of the RMSD Item Fit Statistic

1 IPN—Leibniz Institute for Science and Mathematics Education, Olshausenstraße 62, 24118 Kiel, Germany
2 Centre for International Student Assessment (ZIB), Olshausenstraße 62, 24118 Kiel, Germany
Foundations 2022, 2(2), 488-503; https://doi.org/10.3390/foundations2020032
Received: 29 April 2022 / Revised: 25 May 2022 / Accepted: 2 June 2022 / Published: 6 June 2022

Abstract

In this article, statistical properties of the root mean square deviation (RMSD) item fit statistic in item response models are studied. It is shown that RMSD estimates will indicate misfit even for items whose parametric assumption of the item response function is correct (i.e., fitting items) if some item response functions in the test are misspecified. Moreover, it is demonstrated that the RMSD values of misfitting and fitting items depend on the proportion of misfitting items. We propose three alternative bias-corrected RMSD estimators that reduce the bias for fitting items. However, these alternative estimators provide slightly negatively biased estimates for misfitting items compared to the originally proposed RMSD statistic. In the numerical experiments, we study the case of a misspecified one-parameter logistic item response model and the behavior of the RMSD statistic if differential item functioning occurs.

1. Introduction

Item response theory (IRT) models [1] are an important class of statistical models for the analysis of multivariate binary random variables (i.e., dichotomous variables). IRT models can be regarded as a factor-analytic multivariate technique to summarize a high-dimensional contingency table by a few latent factor variables of interest. Of particular interest is the application of IRT models in educational large-scale assessment studies (LSA; [2]) like the Programme for International Student Assessment (PISA; [3]) that summarize the ability of students on test items in different cognitive domains. This article focuses on unidimensional IRT models that involve a unidimensional latent variable used for describing the discrete multivariate data. Moreover, we only consider dichotomous items, although LSA studies typically involve dichotomous and polytomous items.
Let $X = (X_1, \ldots, X_I)$ be the vector of I dichotomous items $X_i \in \{0, 1\}$. There are $C = 2^I$ different realizations for the multivariate variable $X$. A unidimensional IRT model [4,5] is a statistical model for the probability distribution $P(X = x)$ for $x \in \{0, 1\}^I$, where
$P(X = x) = \int \prod_{i=1}^{I} P_i(\theta)^{x_i} \left[ 1 - P_i(\theta) \right]^{1 - x_i} f(\theta) \, \mathrm{d}\theta \quad (1)$ and
f is a univariate density function. In the rest of the article, we fix this distribution to be standard normal, but this can be weakened [6,7,8]. The functions $P i ( θ ) = P ( X i = 1 | θ )$ are denoted as item response functions (IRF).
It is important to note that in (1), item responses $X i$ are conditionally independent of $θ$. This means that after controlling the latent ability $θ$, pairs of items $X i$ and $X j$ are conditionally uncorrelated. This local independence assumption can be statistically tested [5,9].
In most cases, a parametric model is utilized to estimate the IRF appearing in (1). In more detail, for each item, a parametric IRF $P i * ( θ ; γ i ) ≃ P ( X i = 1 | θ )$ is assumed. The vectors of item parameters $γ i$ are estimated in the IRT model. The one-parameter logistic (1PL) model (also referred to as the Rasch model; [10]) employs the IRF $P i ( θ ) = Ψ ( a ( θ − b i ) )$, where $Ψ$ is the logistic link function, $b i$ is the item difficulty of item i, and a is the common item discrimination. Note that a can be alternatively set to 1, and the standard deviation of the trait $θ$ is estimated. As an alternative, the two-parameter logistic (2PL) model [11] is also frequently used in practice. The 2PL model employs the IRF $P i ( θ ) = Ψ ( a i ( θ − b i ) )$ and has two item-specific parameters. In contrast to the 1PL model, item discriminations are allowed to be item-specific.
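As a concrete illustration, here is a minimal sketch of the 1PL and 2PL item response functions described above (the function names are ours, not from the paper):

```python
import math

def irf_2pl(theta, a, b):
    """2PL item response function: P(X_i = 1 | theta) = Psi(a_i * (theta - b_i)),
    where Psi is the logistic link, a_i the discrimination, b_i the difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def irf_1pl(theta, b, a=1.0):
    """1PL (Rasch) item response function: a common discrimination a for all items."""
    return irf_2pl(theta, a, b)
```

At $\theta = b_i$, the response probability is exactly 0.5; the discrimination controls the slope of the curve at that point, which is why item-specific $a_i$ in the 2PL model allows items to differ in how sharply they separate low and high abilities.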
Typically, the parametric assumption will be a (slight) misspecification of the true IRT model (1). That is, the multivariate vector $X$ is represented by:
$P(X = x) \simeq \int \prod_{i=1}^{I} P_i^{*}(\theta; \gamma_i)^{x_i} \left[ 1 - P_i^{*}(\theta; \gamma_i) \right]^{1 - x_i} f(\theta) \, \mathrm{d}\theta. \quad (2)$
In practical applications, it can be hoped that the approximation of $P i$ by $P i *$ is good enough because the shape of the IRF is used for describing and selecting items in an educational test.
The parameters $\gamma_i$ of the assumed parametric IRFs in Equation (2) can be estimated by (marginal) maximum likelihood using an expectation-maximization algorithm [12,13,14]. In practice, the integral in (2) can be approximated by fixed (rectangular) quadrature integration. If a standard normal density f is used, a quadrature grid of 21 or 41 equidistant $\theta$ points between $-6$ and 6 is often used in software implementations.
The assessment of the adequacy of parametric IRFs (i.e., item fit; [15,16]) is an active field in psychometric research. The main idea is to assess the discrepancy between a true IRF $P i$ and the assumed parametric IRF $P i *$. Of vital interest is to find those misfitting items i for which the assumed IRFs $P i *$ are seriously incorrect. In these cases, different functional forms of the IRF might be used, or item $X i$ might be deleted from further analysis. In this article, we study the statistical behavior of the root mean square deviation (RMSD; [17,18,19,20]) item fit statistic. It is shown that a misfit for some items also affects the item fit assessment of the fitting items because the misspecified IRT model allocates the misspecification of one item to other fitting items. Moreover, we demonstrate that the expected value of the RMSD statistic depends on the sample size. To circumvent this obstacle, three alternative bias-corrected estimators of the RMSD statistic are investigated.
The rest of the article is structured as follows. In Section 2, the RMSD statistic is introduced and a few population and finite-sample properties are presented. Section 3 proposes three alternative bias-corrected RMSD estimators. In Section 4, four numerical experiments are carried out in order to compare the performance of the original RMSD estimators with the bias-corrected RMSD alternatives. Moreover, we also study the behavior of the population RMSD value as a function of the proportion of misfitting items. Finally, the paper closes with a discussion in Section 5.

2. RMSD Item Fit Statistic

In this section, we introduce the RMSD item fit statistic. The item fit can be defined as the discrepancy between $P i$ and $P i *$. In practice, the parametric IRFs $P i *$ are obtained, but the true IRFs $P i$ can be nonparametrically defined and are not directly accessible. Nevertheless, one can define:
$\mathrm{RMSD}_{i,\text{pseudo-true}} = \sqrt{ \int \left[ P_i(\theta) - P_i^{*}(\theta; \gamma_i) \right]^2 f(\theta) \, \mathrm{d}\theta }. \quad (3)$
For a fitted IRT model with a parametric assumption $P i *$ (see Equation (2)), the involved true but unknown IRFs $P i$ must be replaced by some estimate. As already mentioned in Section 1, the estimation of the IRT model relies on evaluating the integral in (2) on a grid $θ 1 , … , θ T$ of T quadrature points for the ability variable $θ$. Hence, all involved integrations in model fitting and item fit assessment will be replaced by summations that involve the finite grid of quadrature points.
As pointed out by an anonymous reviewer, the RMSD statistic in (3) is only designed to detect misfit in the functional form of the IRF. The RMSD is insensitive to detecting violations from the local independence assumption and unidimensionality. However, the RMSD can be effectively utilized for studying differential item functioning (see Section 4.3).
For I dichotomous items $X_1, \ldots, X_I$, there are $C = 2^I$ different item response patterns. For a vector $x_p = (x_{p1}, \ldots, x_{pI}) \in \{0, 1\}^I$, we define the index p of an item response pattern by $p = \sum_{i=1}^{I} 2^{i-1} x_{pi}$. Hence, we can associate the vector of item responses with item response patterns. According to the local independence assumption, we can compute the individual likelihood function for pattern p based on the true or the assumed parametric IRF, respectively, by:
$f_{p|t} = P(X = x_p \,|\, \theta_t) = \prod_{i=1}^{I} P_i(\theta_t)^{x_{pi}} \left[ 1 - P_i(\theta_t) \right]^{1 - x_{pi}} \quad (4)$ and
$f_{p|t}^{*} = \prod_{i=1}^{I} P_i^{*}(\theta_t)^{x_{pi}} \left[ 1 - P_i^{*}(\theta_t) \right]^{1 - x_{pi}} \quad \text{for } t = 1, \ldots, T. \quad (5)$
In Equations (1) and (2), the normal distribution is typically fixed. Hence, values of the density f evaluated at the discrete quadrature grid are known as $f t = P ( θ = θ t )$ with $∑ t = 1 T f t = 1$. Note also that the data-generating model (1) can be rewritten by replacing the integration with summation as
$w_p = P(X = x_p) = \sum_{t=1}^{T} f_{p|t} \, f_t. \quad (6)$
Clearly, it also holds that $∑ p = 1 C w p = 1$.
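To make the computation of the pattern probabilities $w_p$ on the discrete quadrature grid concrete, here is a small sketch (the helper name `pattern_prob` is ours): the likelihood of a pattern at each grid point is a product over items by local independence, and $w_p$ is the grid-weighted sum of these likelihoods.

```python
def pattern_prob(pattern, irf_grid, f_t):
    """Compute w_p = sum_t f_t * prod_i P_i(theta_t)^x_pi * (1 - P_i(theta_t))^(1 - x_pi).

    pattern:  tuple of 0/1 item responses (x_p1, ..., x_pI)
    irf_grid: irf_grid[t][i] = P_i(theta_t) on the quadrature grid
    f_t:      grid weights with sum_t f_t = 1
    """
    total = 0.0
    for t, ft in enumerate(f_t):
        lik = 1.0  # local independence: likelihood is a product over items
        for i, x in enumerate(pattern):
            p = irf_grid[t][i]
            lik *= p if x == 1 else 1.0 - p
        total += ft * lik
    return total
```

Summing `pattern_prob` over all $2^I$ patterns yields 1, mirroring $\sum_p w_p = 1$.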
The estimation of the unknown IRF $P_i$ in Equation (3) is based on individual posterior distributions [20,21,22]. For each pattern p and each quadrature point $\theta_t$, the posterior distribution $h_{t|p}^{*}$ is given by
$h_{t|p}^{*} = \frac{ f_{p|t}^{*} f_t }{ \sum_{u=1}^{T} f_{p|u}^{*} f_u } = \frac{ f_{p|t}^{*} f_t }{ w_p^{*} }, \quad (7)$
where $w p * = ∑ t = 1 T f p | t * f t$. Finally, the observed IRF $P i , obs$ as an estimate of $P i$ is defined by
$P_{i,\mathrm{obs}}(\theta_t) = \frac{ \sum_{p=1}^{C} w_p h_{t|p}^{*} x_{pi} }{ \sum_{p=1}^{C} w_p h_{t|p}^{*} }. \quad (8)$
Then, the RMSD statistic from (3) can be rewritten as:
$\mathrm{RMSD}_i = \sqrt{ \sum_{t=1}^{T} \left[ P_{i,\mathrm{obs}}(\theta_t) - P_i^{*}(\theta_t; \gamma_i) \right]^2 f_t }. \quad (9)$
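A sketch of the two quantities defined above (function names are ours): the observed IRF is a posterior-weighted average of the item responses across patterns, and the RMSD is an $f_t$-weighted root mean square of its deviation from the parametric IRF.

```python
import math

def observed_irf(w, h, x_i):
    """Observed IRF: P_i,obs(theta_t) = sum_p w_p h*_{t|p} x_pi / sum_p w_p h*_{t|p}.

    w:   pattern probabilities w_p, length C
    h:   posterior weights h[t][p] = h*_{t|p} on a grid of T points
    x_i: responses x_pi of item i for each pattern p, length C
    """
    C = len(w)
    return [sum(w[p] * h[t][p] * x_i[p] for p in range(C))
            / sum(w[p] * h[t][p] for p in range(C))
            for t in range(len(h))]

def rmsd(p_obs, p_param, f_t):
    """RMSD_i = sqrt(sum_t (P_i,obs(theta_t) - P*_i(theta_t))^2 * f_t)."""
    return math.sqrt(sum(ft * (po - pp) ** 2
                         for po, pp, ft in zip(p_obs, p_param, f_t)))
```

Note that a perfectly fitting parametric IRF (`p_param == p_obs`) yields an RMSD of exactly zero, consistent with the unbiasedness result in Section 2.1.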
The RMSD statistic in Equation (9) refers to a population value because the probabilities $w p$ of item response patterns are known. For sample data, observed frequencies $w ^ p$ instead of $w p$ are used for defining an estimate of the true IRF. This estimate is given by:
$\hat{P}_{i,\mathrm{obs}}(\theta_t) = \frac{ \sum_{p=1}^{C} \hat{w}_p h_{t|p}^{*} x_{pi} }{ \sum_{p=1}^{C} \hat{w}_p h_{t|p}^{*} }. \quad (10)$
A sample-based RMSD statistic is then defined as:
$\widehat{\mathrm{RMSD}}_i = \sqrt{ \sum_{t=1}^{T} \left[ \hat{P}_{i,\mathrm{obs}}(\theta_t) - P_i^{*}(\theta_t; \gamma_i) \right]^2 f_t }. \quad (11)$
Note that the item parameter $γ i$ in (11) might be known or unknown.
The RMSD fit statistic has broad applicability in educational assessment [21,23,24,25,26]. It primarily serves as an effect size of item misfit [15,27], and RMSD values larger than 0.05 or 0.08 might indicate a notable violation of the parametric IRF assumption [19,22,28,29,30]. The RMSD item fit statistic bears similarity to residual-based test statistics developed by Haberman and colleagues [15,31,32,33]. Related research based on residual statistics can be found in [26,34].

2.1. Unbiasedness of the Population Value of the RMSD Statistic for a Correctly Specified IRT Model

We now show unbiasedness of the population RMSD statistic (see Equation (9)) if the IRT model is correctly specified. In this case, we have $P i ( θ t ) = P i * ( θ t ; γ i )$ for all $i = 1 , … , I$, $f p | t = f p | t *$ for all $t = 1 , … , T$, and $w p = w p *$ for all $p = 1 , … , C$. The finding has also been presented by [32]. We only have to show that $P i ( θ t ) = P i , obs ( θ t )$. We analyze the numerator and the denominator of $P i , obs$ in (8). For the numerator of $P i , obs$, we get:
$\sum_{p=1}^{C} w_p h_{t|p}^{*} x_{pi} = \sum_{p=1}^{C} w_p \frac{ f_{p|t} f_t }{ w_p } x_{pi} = f_t \sum_{p=1}^{C} f_{p|t} x_{pi} = f_t P_i(\theta_t). \quad (12)$
For the denominator of $P i , obs$, we receive:
$\sum_{p=1}^{C} w_p h_{t|p}^{*} = \sum_{p=1}^{C} w_p \frac{ f_{p|t} f_t }{ w_p } = f_t. \quad (13)$
Hence, we get $P i , obs ( θ t ) = P i ( θ t )$. If the IRT model is correctly specified, the RMSD population value is zero, and we get unbiasedness.

2.2. Population RMSD Statistic for Misspecified IRT Models

Now, we derive the population value of the RMSD statistic if the IRT model is misspecified. This means that the assumed parametric IRF $P i *$ differs from the true data-generating IRF $P i$. Consequently, it follows that $f p | t * ≠ f p | t$. Define $e p | t * = f p | t * − f p | t$. We now study the numerator and the denominator of $P i , obs$ in Equation (8). For the numerator, we get
$\sum_{p=1}^{C} w_p h_{t|p}^{*} x_{pi} = \sum_{p=1}^{C} x_{pi} f_{p|t}^{*} f_t (w_p^{*})^{-1} \sum_{u=1}^{T} f_{p|u} f_u = \sum_{p=1}^{C} x_{pi} f_{p|t}^{*} f_t (w_p^{*})^{-1} \sum_{u=1}^{T} \left( f_{p|u}^{*} - e_{p|u}^{*} \right) f_u = f_t P_i(\theta_t) - f_t \sum_{p=1}^{C} x_{pi} f_{p|t}^{*} e_p^{*}, \quad (14)$
where $e_p^{*} = (w_p^{*})^{-1} \sum_{t=1}^{T} e_{p|t}^{*} f_t$. Similar calculations for the denominator result in
$\sum_{p=1}^{C} w_p h_{t|p}^{*} = f_t - f_t \sum_{p=1}^{C} f_{p|t}^{*} e_p^{*}. \quad (15)$
Hence, the observed IRF $P i , obs$ can be determined as:
$P_{i,\mathrm{obs}}(\theta_t) = \frac{ P_i(\theta_t) - \sum_{p=1}^{C} x_{pi} f_{p|t}^{*} e_p^{*} }{ 1 - \sum_{p=1}^{C} f_{p|t}^{*} e_p^{*} }. \quad (16)$
By applying a Taylor expansion of (16) and ignoring higher-order terms, we get:
$P_{i,\mathrm{obs}}(\theta_t) \simeq P_i(\theta_t) - \sum_{p=1}^{C} \left( x_{pi} - P_i(\theta_t) \right) f_{p|t}^{*} e_p^{*}. \quad (17)$
Notably, misspecified IRFs enter $e p | t *$, which subsequently enter the $e p *$ terms in (17). Interestingly, the observed IRF of fitting items (i.e., $P i = P i *$) will also be typically biased if there are some misfitting items in the test. Therefore, the RMSD statistic for fitting items will be larger than zero. It is unclear how Equation (17) affects the RMSD population values for misfitting items. In our experience from empirical applications, the RMSD value for misfitting items will be much smaller than the pseudo-true RMSD value defined in Equation (3) (see [21]).

2.3. On the Positive Bias of the Sample-Based RMSD Statistic

We now show that the expected value of the sample-based RMSD statistic $\widehat{\mathrm{RMSD}}_i$ is typically larger than the population RMSD statistic. The reason is that we now use observed frequencies $\hat{w}_p$ instead of item response pattern probabilities $w_p$ in the computation of the estimated observed IRF $\hat{P}_{i,\mathrm{obs}}$. By applying a first-order multivariate Taylor expansion, we obtain
$\hat{P}_{i,\mathrm{obs}}(\theta_t) = \frac{ \sum_{p=1}^{C} \hat{w}_p h_{t|p}^{*} x_{pi} }{ \sum_{p=1}^{C} \hat{w}_p h_{t|p}^{*} } \simeq P_{i,\mathrm{obs}}(\theta_t) + \sum_{p=1}^{C} \frac{ h_{t|p}^{*} x_{pi} \sum_{q=1}^{C} w_q h_{t|q}^{*} - \left( \sum_{q=1}^{C} w_q h_{t|q}^{*} x_{qi} \right) h_{t|p}^{*} }{ \left( \sum_{q=1}^{C} w_q h_{t|q}^{*} \right)^2 } \, ( \hat{w}_p - w_p ). \quad (18)$
We can simplify (18) to
$\hat{P}_{i,\mathrm{obs}}(\theta_t) \simeq P_{i,\mathrm{obs}}(\theta_t) + \sum_{p=1}^{C} \frac{ h_{t|p}^{*} \sum_{q=1}^{C} w_q ( x_{pi} - x_{qi} ) h_{t|q}^{*} }{ \left( \sum_{q=1}^{C} w_q h_{t|q}^{*} \right)^2 } \, ( \hat{w}_p - w_p ). \quad (19)$
Therefore, we can write
$\hat{P}_{i,\mathrm{obs}}(\theta_t) \simeq P_{i,\mathrm{obs}}(\theta_t) + e_i(\theta_t), \quad (20)$
where $e i ( θ t )$ is the second term after the ≃ sign in (19) and has an expected value of zero. Hence, we get an expected value of the square of the sample-based RMSD statistic of
$\mathrm{E}\left[ \widehat{\mathrm{RMSD}}_i^{\,2} \right] = \mathrm{RMSD}_i^2 + \sum_{t=1}^{T} \mathrm{E}\left[ e_i(\theta_t)^2 \right] f_t. \quad (21)$
As a consequence of (21), sample-based estimates of the RMSD statistic typically turn out to be larger on average than their population-based counterparts.

3. Bias-Corrected RMSD Estimators

In Section 2.3, it was shown that the sample-based RMSD statistic is positively biased. Three alternative RMSD estimators are proposed in the following two subsections to eliminate the bias partially.

3.1. Analytical Bias Correction

At first, we discuss an analytical bias correction (abc) of the RMSD statistic that has been implemented for about five years in the R [35] package CDM [36,37]. The bias correction relies on the idea that there is a long test (i.e., I is sufficiently large), and there is no bias in the estimation of $θ$ for each person in the population. This means that a person with a data-generating ability value $θ t$ is concentrated at $θ t$ in its individual posterior distribution $h t | p *$. Let the observed IRF be estimated as $P ^ i , obs ( θ t )$. The estimated variance of $P ^ i , obs ( θ t )$ is given by:
$\mathrm{E}\left[ \hat{P}_{i,\mathrm{obs}}(\theta_t) - P_{i,\mathrm{obs}}(\theta_t) \right]^2 = V_{it} = N^{-1} \hat{P}_{i,\mathrm{obs}}(\theta_t) \left( 1 - \hat{P}_{i,\mathrm{obs}}(\theta_t) \right), \quad (22)$
where N denotes the sample size. Hence, the total amount of expected bias in $RMSD ^ i 2$ is given by:
$B_{i,\mathrm{abc}} = \sum_{t=1}^{T} f_t V_{it}. \quad (23)$
This term must be subtracted from the square of the original RMSD statistic $\widehat{\mathrm{RMSD}}_i^2$. To avoid taking the square root of a negative number, we define the square root of the positive part by:
$\mathrm{sqrt}^{+}(x) = \sqrt{ \max(x, 0) }. \quad (24)$
Finally, we can define the analytical bias-corrected RMSD estimator as
$\widehat{\mathrm{RMSD}}_{i,\mathrm{abc}} = \mathrm{sqrt}^{+}\left( \widehat{\mathrm{RMSD}}_i^2 - B_{i,\mathrm{abc}} \right). \quad (25)$
Due to the definition of (25), the analytical bias-corrected RMSD will always be smaller than the original RMSD statistic.
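The analytical bias correction above can be sketched as follows (function names are ours; the estimated observed IRF and the grid weights are taken as given):

```python
import math

def sqrt_plus(x):
    """sqrt+(x) = sqrt(max(x, 0)): square root of the positive part."""
    return math.sqrt(max(x, 0.0))

def rmsd_abc(rmsd_hat, p_obs_hat, f_t, n):
    """Analytical bias-corrected RMSD: subtract the bias term
    B_abc = sum_t f_t * P_hat(theta_t) * (1 - P_hat(theta_t)) / N
    from the squared RMSD estimate, then take sqrt+.

    rmsd_hat:  original sample-based RMSD estimate
    p_obs_hat: estimated observed IRF on the quadrature grid
    f_t:       grid weights summing to one
    n:         sample size N
    """
    b_abc = sum(ft * p * (1.0 - p) / n for ft, p in zip(f_t, p_obs_hat))
    return sqrt_plus(rmsd_hat ** 2 - b_abc)
```

Because the bias term is nonnegative, the corrected estimate never exceeds the original RMSD, and it shrinks all the way to zero whenever the sampling-variance term dominates the squared estimate.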

3.2. Bootstrap and Jackknife Bias Correction

As an alternative to analytical bias corrections of the RMSD statistic, we also investigate bias corrections by bootstrap and jackknife [38,39]. As in Section 3.1, we conduct a bias correction of the square of the RMSD statistic and take the square root for the RMSD calculation afterward. This approach differs from [21], which performed nonparametric bootstrap for bias correction based on the non-squared RMSD statistic. The performance of this approach was not entirely satisfactory. Moreover, we also aimed to compare our proposed resampling-based bias correction with the analytical bias correction.
For the bootstrap bias correction (bbc), we draw a sample of N persons with replacement. The average of the squared RMSD statistic across bootstrap samples will typically be larger than the square of the original RMSD statistic, in particular in small samples. Let $\overline{\mathrm{MSD}}_{i,\mathrm{bbc}}$ be the average of the squared RMSD statistics across bootstrap estimates. The bias term in the square of the RMSD statistic can be determined by
$B_{i,\mathrm{bbc}} = \overline{\mathrm{MSD}}_{i,\mathrm{bbc}} - \widehat{\mathrm{RMSD}}_i^2. \quad (26)$
Hence, the value $B i , bbc$ can be used for a bias correction of the RMSD (see [38]). In more detail, we define the bootstrap bias-corrected RMSD statistic by
$\widehat{\mathrm{RMSD}}_{i,\mathrm{bbc}} = \mathrm{sqrt}^{+}\left( \widehat{\mathrm{RMSD}}_i^2 - B_{i,\mathrm{bbc}} \right). \quad (27)$
Similarly, a bias term can also be determined by the jackknife method. In this case, the sample of N persons is divided into J parts. For each jackknife sample, the analysis is repeated by excluding the j-th part ($j = 1, \ldots, J$) from the dataset. Let $\overline{\mathrm{MSD}}_{i,\mathrm{jbc}}$ be the average of the squared RMSD statistics across jackknife estimates. The bias term $B_{i,\mathrm{jbc}}$ using the jackknife method can be computed as:
$B_{i,\mathrm{jbc}} = ( J - 1 ) \left( \overline{\mathrm{MSD}}_{i,\mathrm{jbc}} - \widehat{\mathrm{RMSD}}_i^2 \right). \quad (28)$
Then, the jackknife bias-corrected RMSD statistic can be defined as:
$\widehat{\mathrm{RMSD}}_{i,\mathrm{jbc}} = \mathrm{sqrt}^{+}\left( \widehat{\mathrm{RMSD}}_i^2 - B_{i,\mathrm{jbc}} \right). \quad (29)$
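The bootstrap and jackknife corrections above can be sketched as follows (function names are ours; the RMSD values of the resamples are assumed to have been computed externally, e.g., by refitting on each resample):

```python
import math

def sqrt_plus(x):
    """sqrt+(x) = sqrt(max(x, 0))."""
    return math.sqrt(max(x, 0.0))

def rmsd_bbc(rmsd_hat, rmsd_boot):
    """Bootstrap bias-corrected RMSD:
    B_bbc = mean(RMSD_boot^2) - RMSD_hat^2, then sqrt+(RMSD_hat^2 - B_bbc)."""
    msd_bar = sum(r ** 2 for r in rmsd_boot) / len(rmsd_boot)
    return sqrt_plus(rmsd_hat ** 2 - (msd_bar - rmsd_hat ** 2))

def rmsd_jbc(rmsd_hat, rmsd_jack):
    """Jackknife bias-corrected RMSD with J parts:
    B_jbc = (J - 1) * (mean(RMSD_jack^2) - RMSD_hat^2)."""
    J = len(rmsd_jack)
    msd_bar = sum(r ** 2 for r in rmsd_jack) / J
    return sqrt_plus(rmsd_hat ** 2 - (J - 1) * (msd_bar - rmsd_hat ** 2))
```

If the resampled squared RMSD values equal the original squared estimate on average, the bias terms vanish and the original estimate is returned unchanged; inflated resample values shrink the corrected estimate toward zero.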

4. Numerical Experiments

The following numerical examples pursue several goals. First, we want to investigate the statistical behavior of the original and bias-corrected RMSD estimators. Second, we are interested in comparing population RMSD values for tests with different amounts of misfitting items. In particular, we are interested in the discrepancy between the pseudo-true RMSD statistic and the population RMSD statistic.

4.1. Study 1: Correctly Specified IRT Model

We assume a correctly specified 1PL IRT model in the first numerical experiment. The test consists of $I = 9$ items: three base items, each duplicated three times, with item difficulties $b_i$ of $-1.0$ (Items 1, 4, 7), $0.5$ (Items 2, 5, 8), and $2.0$ (Items 3, 6, 9). The common item discrimination a was set to 1.0, and a normal distribution was assumed for generating item responses. We varied the sample size N in five levels (125, 250, 500, 1000, and 2000) and utilized 1000 replications in each cell of this simulation study. Note that the population RMSD values (see Equation (9)) equal zero because the IRT model is correctly specified (see Section 2.1). Hence, there are no misfitting items in Study 1. For finite sample sizes, we compute the original RMSD estimator (see Equation (9) in Section 2), the RMSD estimator based on analytical bias correction (abc; see Equation (25) in Section 3.1), the RMSD estimator based on bootstrap bias correction (bbc; see Equation (27) in Section 3.2), and the RMSD estimator based on jackknife bias correction (jbc; see Equation (29) in Section 3.2). A total of 200 bootstrap samples and 50 jackknife samples were used, respectively; the same numbers were also used in the other numerical experiments in this section. For computing the sample-based RMSD estimators, we assumed that the item parameters were known.
In Table 1, the means, standard deviations, and root mean square errors (RMSE) of the four different RMSD estimators are compared as a function of sample size. Because the population RMSD value equals zero, it is evident that the RMSD estimators are positively biased. Notably, the bias is larger for smaller samples; it decreases in larger samples and approaches zero asymptotically. Moreover, the bias-corrected RMSD estimators satisfactorily reduced the bias. The bias was smaller for the resampling-based estimators (bbc and jbc) than for the RMSD estimator based on the analytical bias correction (abc). It is important to emphasize that there was essentially no difference between the bootstrap and the jackknife RMSD estimators. However, all bias-corrected RMSD estimators had larger standard deviations than the original RMSD estimator.
In empirical applications of the RMSD estimator, item misfit is often flagged if the RMSD estimate exceeds a certain cutoff value. We therefore computed the proportion of replications in which an RMSD estimate exceeded a cutoff of 0.05. For a sample size of $N = 125$, the proportion was 0.323 for the original RMSD estimator. However, it was notably smaller for the analytical bias-corrected (abc: 0.158), the bootstrap bias-corrected (bbc: 0.114), and the jackknife bias-corrected (jbc: 0.105) RMSD estimators. For $N = 250$, the proportions turned out to be much smaller, but the bias-corrected estimators still had a substantially lower proportion of declaring an item to be misfitting (orig: 0.100; abc: 0.054; bbc: 0.038; jbc: 0.038). For sample sizes of at least 500, the proportions of RMSD estimates larger than 0.05 were below 0.01.

4.2. Study 2: Simulated 2PL Model, but Fitted 1PL Model

In Study 2, we investigated the statistical behavior of the RMSD statistic in a misspecified IRT model. We simulated item responses according to the 2PL model but fitted the (misspecified) 1PL model. We used test lengths I of 6, 9, 12, and 15 items. As in Study 1, three base items with item difficulties $b_i$ of $-1.0$, $0.5$, and $2.0$ were used. For the different test lengths, these difficulties were replicated twice, three times, four times, and five times, respectively. For all test lengths, we chose the number of misfitting items $I_{\mathrm{misfit}}$ as 1, 2, or 3. All misfitting items had item discriminations $a_i$ according to one of the four levels 0.0, 0.2, 0.4, or 0.6; all misfitting items in a condition had the same item discrimination. The fitting items had item discriminations of 1.0. With one misfitting item, Item 2 had the misfit; with two misfitting items, Items 2 and 3 showed misfit; and with three misfitting items, Items 1, 2, and 3 were misfitting. We used the same sample sizes $N = 125, 250, 1000$, and 2000 as in Study 1. Moreover, we also computed population values of the original RMSD statistic using an infinite sample size and computed probabilities of all item response patterns by numerical integration in Equation (2).
In Table 2, the population values of the RMSD statistic are displayed as a function of item discriminations and the number of misfitting items in a test with $I = 9$ items; RMSD values of 6 out of the 9 items are presented. It can be seen that the RMSD statistic of the misfitting Item 2 with item discrimination $a_i = 0$ was larger in the test with one misfitting item (0.079) than in a test with three misfitting items (0.036). Interestingly, the RMSD value of fitting items increases with an increasing proportion of misfitting items with item discrimination $a_i = 0$ (e.g., for Item 4, we obtain an RMSD value of 0.011 for $I_{\mathrm{misfit}} = 1$ and of 0.019 for $I_{\mathrm{misfit}} = 3$). Note that even a relatively strong item misfit such as $a_i = 0.6$ provided an RMSD value of only 0.027 in the case of $I_{\mathrm{misfit}} = 1$. This implies that a cutoff of 0.05 for the RMSD value should likely be reduced to detect misfitting items.
In Table 3, we computed the population RMSD value in a test with one misfitting item with item discrimination $a_i = 0.2$ and displayed the values as a function of test length. According to Equation (3), the pseudo-true RMSD value for Item 2 was 0.159. The population RMSD value increased from 0.037 (for $I = 6$) to 0.090 (for $I = 15$) and will eventually reach the pseudo-true RMSD value of 0.159 in a test of infinite length. However, note that the test length of $I = 15$ corresponds to a proportion of 0.067 of misfitting items, which is rather small for practical applications. Hence, one can expect that the population RMSD values for misfitting items are much smaller than the pseudo-true RMSD values in practical applications. Note that the population RMSD values were slightly larger than zero for fitting items (e.g., Items 1, 3, 4, 5, and 6 in Table 3). These values converge very slowly to the pseudo-true RMSD value of 0.
Finally, Table 4 displays the behavior of the different RMSD estimators in finite samples for a test of $I = 9$ items with one or three misfitting items. In this case, Item 2 was a misfitting item, and Item 5 was a fitting item. It can be seen that the bias-corrected RMSD estimators were similar to the original RMSD estimator only in a large sample of $N = 2000$. In smaller samples, the original RMSD estimator had larger values and the bias-corrected RMSD estimators had lower values than the asymptotic limit of the RMSD estimator (i.e., the population value). Notably, the analytical bias-corrected RMSD estimator had better finite-sample performance in terms of reaching the population value than the bootstrap- or jackknife-based RMSD estimators. As in Study 1, the bootstrap- and jackknife-based RMSD estimators were most effective in reducing the bias of the RMSD value for fitting items. Overall, given the larger negative bias of the bootstrap and jackknife RMSD estimators, we tend to prefer the analytical bias-corrected RMSD estimator for practical use because it has a much smaller negative bias for misfitting items and still outperforms the original RMSD estimator.

4.3. Study 3: Unbalanced Differential Item Functioning

Differential item functioning (DIF; [40,41,42]) exists if item parameters differ in subgroups of the population or are functions of covariates. For example, in educational LSA studies such as PISA, item parameters can differ across countries [43]. In this situation, the RMSD statistic is applied to detect items with DIF across countries [19,22,23,28].
In Study 3, we investigate a misspecified IRT model in which an item difficulty is shifted in the 1PL model. This corresponds to the situation of uniform DIF [28]. That is, an item difficulty $b_i$ is used for generating the data, but a parameter $b_i^* \neq b_i$ is used in fitting the IRT model. In more detail, we fixed the item parameters to known values for the computation of the RMSD statistic. As in Studies 1 and 2, we use three base items with item difficulties $-1.0$, $0.5$, and $2.0$. We considered scenarios with one, two, or three misfitting items in a test comprising $I = 9$ items; the base items were replicated three times. For misfitting items, we generated item responses with item difficulties $b_i^* = b_i + \delta$, where the uniform DIF effect $\delta$ was varied as 0.2, 0.4, 0.6, and 1.0. Note that all DIF effects were positive, resulting in an unbalanced DIF condition [44]. In the computation of the RMSD statistic, we used the incorrect item parameters $b_i$ that ignored DIF. In the scenario with one misfitting item, Item 2 had item misfit (i.e., DIF). In the scenarios with two or three misfitting items, Items 2 and 3 and Items 1, 2, and 3 had item misfit, respectively.
In Table 5, the population RMSD values are displayed as a function of the size of the uniform DIF effect $\delta$ and the number of misfitting items. It can be seen that the RMSD values increased with larger DIF effects $\delta$ and decreased with a larger number of misfitting items. Interestingly, for Item 2, the population RMSD values with one misfitting item were not substantially smaller than their pseudo-true RMSD values of 0.041 ($\delta = 0.2$), 0.080 ($\delta = 0.4$), 0.117 ($\delta = 0.6$), and 0.186 ($\delta = 1.0$). Hence, assessing DIF effects with the RMSD statistic seemed to be more effective than assessing item misfit regarding the functional form of the IRF (i.e., an incorrect specification of the item discrimination by imposing a 1PL model). In alignment with Study 2, RMSD population values for misfitting items were also smaller for a larger proportion of misfitting items, while the RMSD population values for fitting items increased with a larger proportion of misfitting items. For example, in the scenario with uniform DIF effects of $\delta = 1.0$, the population RMSD value of the misfitting Item 2 was even lower (i.e., 0.062) than the population RMSD value of the fitting Item 5 (i.e., 0.065). Of course, this property is a consequence of the fact that RMSD values for items with the same DIF effect but different item difficulties can be substantially different [22].
In Table 6, finite-sample properties of the different RMSD estimators are presented for the scenario involving one or three misfitting items with a uniform DIF effect of $\delta = 0.6$. As in Study 2, it can be concluded that the analytical bias-corrected RMSD estimator (abc) should be preferred over the bootstrap- or jackknife-based RMSD estimators (bbc or jbc) regarding the negative bias for misfitting items in smaller samples. For fitting items, the loss in efficiency of the bias reduction of the abc RMSD estimator compared to the bbc or jbc estimators seems to be defensible.

4.4. Study 4: Comparing Balanced and Unbalanced Differential Item Functioning

Finally, Study 4 compares balanced and unbalanced DIF. In balanced DIF, DIF in item difficulties cancels out at the test level [45]. That is, there are positive and negative DIF effects, and it can be assumed that there is no average bias at the test level. We investigated two misfitting items in tests of length $I = 6 , 9 , 12 , 15$. We used the same item difficulties as in other studies in this section. For the item difficulty of Item 2, we used $b i + δ$ in the data-generating model, while we used $b i − δ$ for Item 3 in the scenario with balanced DIF. For a comparison with the unbalanced DIF condition, we also used $b i + δ$ as the item difficulty for Item 3 as in Study 3. We fixed the DIF effect $δ$ to 0.6 and varied test lengths I as 6, 9, 12, and 15.
In Table 7, the population RMSD values are presented as a function of test length. The pseudo-true parameter for Item 2 in the simulation was 0.117. For Item 3 it was 0.069 for unbalanced DIF, and it was 0.091 for balanced DIF. Interestingly, the population RMSD values for the misfitting items almost reached the pseudo-true RMSD values in the case of balanced DIF for all studied test lengths. This was in contrast to the scenario of unbalanced DIF in which the population RMSD value increased with increasing test lengths. This finding can be explained by the fact that we will not expect strongly biased individual posterior distributions and observed IRFs in the case of balanced DIF. In contrast, posterior distributions and observed IRFs will be typically biased in the scenario of unbalanced DIF.

5. Discussion

In this article, we systematically studied the behavior of the RMSD estimators in infinite sample sizes (i.e., at the population level) and finite sample sizes. It turned out that the population RMSD value depended on the proportion of misfitting items. With a larger proportion of misfitting items, the RMSD values of misfitting items decrease, but the RMSD values of fitting items increase. This means that the RMSD item fit statistic must always be interpreted as a relative fit statistic: it depends on the properties of the other items appearing in the test.
As with all simulation studies, our study is limited to the studied conditions. We only investigated relatively short test lengths, although the findings can be expected to generalize to longer tests. We also used only a few simulation factor levels for the proportion of misfitting items. Finally, we restricted ourselves to the study of a misspecified 1PL model (see [21,46] for more complex misspecified item response functions) and uniform differential item functioning.
Moreover, it was demonstrated that the RMSD estimator was positively biased in smaller samples. This can be explained by the fact that the RMSD is defined as a discrepancy statistic, and a discrepancy statistic will always be positive in small samples due to sampling variability. This property has also been shown for global fit statistics in structural equation modeling [47,48]. Following developments in structural equation modeling [49], we pursued the route of constructing bias-corrected estimators of the RMSD based on an analytical treatment as well as fully computational solutions based on bootstrap and jackknife resampling. While the original RMSD estimator showed a positive bias for misfitting items, our proposed bias-corrected RMSD alternatives were negatively biased. Nevertheless, the analytical bias-corrected RMSD estimator had the most desirable properties and can be recommended for default use in applied research. Future research might consider averaging the original RMSD estimate and a bias-corrected RMSD estimate to obtain an estimator with an even lower bias that does not increase the standard deviation of the resulting estimator.
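The bootstrap and jackknife bias corrections follow the standard resampling recipes [38]. The sketch below is a generic Python illustration for an arbitrary plug-in estimator; the RMSD-specific corrections in Equations (25), (27), and (29) additionally require refitting the IRT model within each resample, which is omitted here.

```python
import numpy as np

def bias_corrected(stat, x, n_boot=200, seed=0):
    """Generic bootstrap and jackknife bias corrections for an estimator stat(x)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = stat(x)
    # bootstrap: estimated bias = mean of resampled estimates minus the original
    boot_mean = np.mean([stat(x[rng.integers(0, n, n)]) for _ in range(n_boot)])
    bbc = 2 * est - boot_mean            # est minus the estimated bias
    # jackknife: based on the n leave-one-out estimates
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    jbc = n * est - (n - 1) * loo.mean()
    return est, bbc, jbc
```

For an unbiased statistic such as the sample mean, both corrections leave the estimate essentially unchanged; for a positively biased discrepancy statistic, they shrink the bias at the cost of some additional variance, mirroring the bias-variance pattern visible in Tables 1, 4, and 6.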
Interestingly, other fit statistics such as item outfit [50] or the $Q_1$ statistic [46] also involve the distance $(P_i - P_i^*)^2$ as an effect size but replace the weighting by the density f with a weighting function that standardizes the squared distance by $P_i^*(1 - P_i^*)$. Pursuing this idea further, it might be interesting to investigate a more general RMSD statistic of the type $\int (P_i(\theta) - P_i^*(\theta))^2 \, \omega_i(\theta) \, d\theta$ with an appropriate weighting function $\omega_i$ (see also [51]).
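Such a generalized statistic can be sketched directly on a quadrature grid. In the following Python fragment (illustrative only; all variable names and parameter values are assumptions), the weight $\omega_i$ is the standard normal density, which recovers the original RMSD weighting; replacing it with weights proportional to $1/(P_i^*(1-P_i^*))$ would move toward the outfit-type standardization.

```python
import numpy as np

def weighted_rmsd(p_obs, p_model, weights):
    """RMSD-type discrepancy between two IRFs on a theta quadrature grid."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the weight function
    return float(np.sqrt(np.sum(w * (p_obs - p_model) ** 2)))

theta = np.linspace(-4, 4, 41)           # quadrature grid
dens = np.exp(-theta ** 2 / 2)           # N(0,1) density weights (original RMSD)
p_model = 1 / (1 + np.exp(-(theta - 0.3)))        # fitted 1PL IRF, b = 0.3
p_true = 1 / (1 + np.exp(-1.4 * (theta - 0.3)))   # data-generating 2PL IRF
rmsd = weighted_rmsd(p_true, p_model, dens)
```

The statistic is zero exactly when the two IRFs coincide on the grid, and different choices of `weights` yield different members of the proposed family.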
Finally, we argued that the RMSD values depend on test length and the proportion of misfitting items. Hence, using a general cutoff value for declaring misfitting items might not be justified. Indeed, von Davier and Bezirhan [52] also argued that misfitting items should be detected by assuming a mixture distribution of RMSD values of misfitting and fitting items. Items with large RMSD values are treated as outliers and detected by techniques from robust statistics ([52]; see also [53]). We think that such an approach is a promising direction for future research. It implies that RMSD cutoff values must be selected dependent on the conditions in a particular dataset. Identifying misfitting items as outliers corresponds to the idea that only a portion of the items in a test deviate from an assumed functional form of the item response function. In our opinion, however, it can be questioned whether item misfit is really distributed so unsystematically. We argued elsewhere that a particular IRT model is chosen on purpose and that item or model misfit should play no or only a minor role in model selection [54].
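A minimal version of the outlier idea can be written down with a median/MAD rule. This Python sketch is a simplified stand-in for the robust mixture approach of [52], not their actual method; the cutoff constant `c` is an assumption.

```python
import numpy as np

def flag_misfit(rmsd_values, c=3.0):
    """Flag items whose RMSD lies far above the bulk of the test's RMSD values.

    Uses a median/MAD outlier rule: an item is flagged if its RMSD exceeds the
    median by more than c robust standard deviations."""
    x = np.asarray(rmsd_values, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # consistent robust scale estimate
    return (x - med) > c * max(mad, 1e-12)
```

Because the cutoff is derived from the bulk of the items in the same test, it adapts to the dataset at hand, which is exactly the property motivating dataset-dependent RMSD cutoffs.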

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
1PL: one-parameter logistic model
2PL: two-parameter logistic model
DIF: differential item functioning
IRF: item response function
IRT: item response theory
LSA: large-scale assessment
PISA: programme for international student assessment
RMSD: root mean square deviation
RMSE: root mean square error

References

1. van der Linden, W.J.; Hambleton, R.K. (Eds.) Handbook of Modern Item Response Theory; Springer: New York, NY, USA, 1997.
2. Rutkowski, L.; von Davier, M.; Rutkowski, D. (Eds.) A Handbook of International Large-Scale Assessment: Background, Technical Issues, and Methods of Data Analysis; Chapman Hall/CRC Press: London, UK, 2013.
3. OECD. PISA 2009. Technical Report; OECD: Paris, France, 2012. Available online: https://bit.ly/3xfxdwD (accessed on 29 April 2022).
4. Bock, R.D.; Moustaki, I. Item response theory in a general framework. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 469–513.
5. Yen, W.M.; Fitzpatrick, A.R. Item response theory. In Educational Measurement; Brennan, R.L., Ed.; Praeger Publishers: Westport, CT, USA, 2006; pp. 111–154.
6. Casabianca, J.M.; Lewis, C. IRT item parameter recovery with marginal maximum likelihood estimation using loglinear smoothing models. J. Educ. Behav. Stat. 2015, 40, 547–578.
7. Woods, C.M. Empirical histograms in item response theory with ordinal data. Educ. Psychol. Meas. 2007, 67, 73–87.
8. Xu, X.; von Davier, M. Fitting the Structured General Diagnostic Model to NAEP Data; Research Report No. RR-08-28; Educational Testing Service: Princeton, NJ, USA, 2008.
9. Yen, W.M. Effects of local item dependence on the fit and equating performance of the three-parameter logistic model. Appl. Psychol. Meas. 1984, 8, 125–145.
10. Rasch, G. Probabilistic Models for Some Intelligence and Attainment Tests; Danish Institute for Educational Research: Copenhagen, Denmark, 1960.
11. Birnbaum, A. Some latent trait models and their use in inferring an examinee's ability. In Statistical Theories of Mental Test Scores; Lord, F.M., Novick, M.R., Eds.; MIT Press: Reading, MA, USA, 1968; pp. 397–479.
12. Bock, R.D.; Aitkin, M. Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika 1981, 46, 443–459.
13. Aitkin, M. Expectation maximization algorithm and extensions. In Handbook of Item Response Theory, Vol. 2: Statistical Tools; van der Linden, W.J., Ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 217–236.
14. Robitzsch, A. A note on a computationally efficient implementation of the EM algorithm in item response models. Quant. Comput. Methods Behav. Sc. 2021, 1, e3783.
15. Sinharay, S.; Haberman, S.J. How often is the misfit of item response theory models practically significant? Educ. Meas. 2014, 33, 23–35.
16. Swaminathan, H.; Hambleton, R.K.; Rogers, H.J. Assessing the fit of item response theory models. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 683–718.
17. Khorramdel, L.; Shin, H.J.; von Davier, M. GDM software mdltm including parallel EM algorithm. In Handbook of Diagnostic Classification Models; von Davier, M., Lee, Y.S., Eds.; Springer: Cham, Switzerland, 2019; pp. 603–628.
18. Kunina-Habenicht, O.; Rupp, A.A.; Wilhelm, O. A practical illustration of multidimensional diagnostic skills profiling: Comparing results from confirmatory factor analysis and diagnostic classification models. Stud. Educ. Eval. 2009, 35, 64–70.
19. Joo, S.H.; Khorramdel, L.; Yamamoto, K.; Shin, H.J.; Robin, F. Evaluating item fit statistic thresholds in PISA: Analysis of cross-country comparability of cognitive items. Educ. Meas. 2021, 40, 37–48.
20. Sueiro, M.J.; Abad, F.J. Assessing goodness of fit in item response theory with nonparametric models: A comparison of posterior probabilities and kernel-smoothing approaches. Educ. Psychol. Meas. 2011, 71, 834–848.
21. Köhler, C.; Robitzsch, A.; Hartig, J. A bias-corrected RMSD item fit statistic: An evaluation and comparison to alternatives. J. Educ. Behav. Stat. 2020, 45, 251–273.
22. Tijmstra, J.; Bolsinova, M.; Liaw, Y.L.; Rutkowski, L.; Rutkowski, D. Sensitivity of the RMSD for detecting item-level misfit in low-performing countries. J. Educ. Meas. 2020, 57, 566–583.
23. Buchholz, J.; Hartig, J. Comparing attitudes across groups: An IRT-based item-fit statistic for the analysis of measurement invariance. Appl. Psychol. Meas. 2019, 43, 241–250.
24. Buchholz, J.; Hartig, J. Measurement invariance testing in questionnaires: A comparison of three multigroup-CFA and IRT-based approaches. Psych. Test Assess. Model. 2020, 62, 29–53. Available online: https://bit.ly/38kswHh (accessed on 29 April 2022).
25. Köhler, C.; Robitzsch, A.; Fährmann, K.; von Davier, M.; Hartig, J. A semiparametric approach for item response function estimation to detect item misfit. Brit. J. Math. Stat. Psychol. 2021, 74, 157–175.
26. Monroe, S. Testing latent variable distribution fit in IRT using posterior residuals. J. Educ. Behav. Stat. 2021, 46, 374–398.
27. Köhler, C.; Hartig, J. Practical significance of item misfit in educational assessments. Appl. Psychol. Meas. 2017, 41, 388–400.
28. Robitzsch, A.; Lüdtke, O. A review of different scaling approaches under full invariance, partial invariance, and noninvariance for cross-sectional country comparisons in large-scale assessments. Psych. Test Assess. Model. 2020, 62, 233–279. Available online: https://bit.ly/3ezBB05 (accessed on 29 April 2022).
29. Robitzsch, A.; Lüdtke, O. Mean comparisons of many groups in the presence of DIF: An evaluation of linking and concurrent scaling approaches. J. Educ. Behav. Stat. 2022, 47, 36–68.
30. Shin, H.J.; Kerzabi, E.; Joo, S.H.; Robin, F.; Yamamoto, K. Comparability of response time scales in PISA. Psych. Test Assess. Model. 2020, 62, 107–135.
31. Haberman, S.J.; Sinharay, S. Generalized residuals for general models for contingency tables with application to item response theory. J. Am. Stat. Assoc. 2013, 108, 1435–1444.
32. Haberman, S.J.; Sinharay, S.; Chon, K.H. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions. Psychometrika 2013, 78, 417–440.
33. van Rijn, P.W.; Sinharay, S.; Haberman, S.J.; Johnson, M.S. Assessment of fit of item response theory models used in large-scale educational survey assessments. Large-Scale Assess. Educ. 2016, 4, 10.
34. Lim, H.; Choe, E.M.; Han, K.T. A residual-based differential item functioning detection framework in item response theory. J. Educ. Meas. 2022; Epub ahead of print.
35. R Core Team. R: A Language and Environment for Statistical Computing; R Core Team: Vienna, Austria, 2022. Available online: https://www.R-project.org/ (accessed on 11 January 2022).
36. George, A.C.; Robitzsch, A.; Kiefer, T.; Groß, J.; Ünlü, A. The R package CDM for cognitive diagnosis models. J. Stat. Softw. 2016, 74, 1–24.
37. Robitzsch, A.; George, A.C. The R package CDM for diagnostic modeling. In Handbook of Diagnostic Classification Models; von Davier, M., Lee, Y.S., Eds.; Springer: Cham, Switzerland, 2019; pp. 549–572.
38. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; CRC Press: Boca Raton, FL, USA, 1994.
39. Kolenikov, S. Resampling variance estimation for complex survey data. Stata J. 2010, 10, 165–199.
40. Ellis, J.L.; Van den Wollenberg, A.L. Local homogeneity in latent trait models. A characterization of the homogeneous monotone IRT model. Psychometrika 1993, 58, 417–429.
41. Holland, P.W.; Wainer, H. (Eds.) Differential Item Functioning: Theory and Practice; Lawrence Erlbaum: Hillsdale, NJ, USA, 1993.
42. Penfield, R.D.; Camilli, G. Differential item functioning and item bias. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 125–167.
43. von Davier, M.; Yamamoto, K.; Shin, H.J.; Chen, H.; Khorramdel, L.; Weeks, J.; Davis, S.; Kong, N.; Kandathil, M. Evaluating item response theory linking and model fit for data from PISA 2000–2012. Assess. Educ. 2019, 26, 466–488.
44. Pohl, S.; Schulze, D. Assessing group comparisons or change over time under measurement non-invariance: The cluster approach for nonuniform DIF. Psych. Test Assess. Model. 2020, 62, 281–303. Available online: https://bit.ly/3ANjH3V (accessed on 29 April 2022).
45. Chalmers, R.P.; Counsell, A.; Flora, D.B. It might not make a big DIF: Improved differential test functioning statistics that account for sampling variability. Educ. Psychol. Meas. 2016, 76, 114–140.
46. Chalmers, R.P.; Ng, V. Plausible-value imputation statistics for detecting item misfit. Appl. Psychol. Meas. 2017, 41, 372–387.
47. Maydeu-Olivares, A.; Shi, D.; Rosseel, Y. Assessing fit in structural equation models: A Monte-Carlo evaluation of RMSEA versus SRMR confidence intervals and tests of close fit. Struct. Equ. Modeling 2018, 25, 389–402.
48. Shi, D.; Maydeu-Olivares, A.; DiStefano, C. The relationship between the standardized root mean square residual and model misspecification in factor analysis models. Multivar. Behav. Res. 2018, 53, 676–694.
49. Maydeu-Olivares, A. Assessing the size of model misfit in structural equation models. Psychometrika 2017, 82, 533–558.
50. Wright, B.D.; Masters, G.N. Computation of OUTFIT and INFIT statistics. Rasch Meas. Trans. 1990, 3, 84–85. Available online: https://bit.ly/3Nyfzv1 (accessed on 29 April 2022).
51. Oshima, T.C.; Morris, S.B. Raju's differential functioning of items and tests (DFIT). Educ. Meas. 2008, 27, 43–50.
52. von Davier, M.; Bezirhan, U. A robust method for detecting item misfit in large scale assessments. PsyArXiv 2021.
53. Robitzsch, A. Robust and nonrobust linking of two groups for the Rasch model with balanced and unbalanced random DIF: A comparative simulation study and the simultaneous assessment of standard errors and linking errors with resampling techniques. Symmetry 2021, 13, 2198.
54. Robitzsch, A.; Lüdtke, O. Reflections on analytical choices in the scaling model for test scores in international large-scale assessment studies. PsyArXiv 2021.
Table 1. Study 1: Mean, standard deviation (SD) and root mean square error (RMSE) for different estimators of the RMSD statistic in a test with $I = 9$ items as a function of sample size N.

| Item | N | Mean: orig | abc | bbc | jbc | SD: orig | abc | bbc | jbc | RMSE: orig | abc | bbc | jbc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 125 | 0.042 | 0.020 | 0.013 | 0.013 | 0.020 | 0.025 | 0.023 | 0.023 | 0.046 | 0.032 | 0.026 | 0.026 |
| 1 | 250 | 0.029 | 0.014 | 0.009 | 0.009 | 0.014 | 0.018 | 0.016 | 0.016 | 0.032 | 0.022 | 0.018 | 0.018 |
| 1 | 500 | 0.021 | 0.010 | 0.006 | 0.006 | 0.010 | 0.013 | 0.011 | 0.011 | 0.023 | 0.016 | 0.013 | 0.013 |
| 1 | 1000 | 0.015 | 0.007 | 0.005 | 0.005 | 0.007 | 0.009 | 0.008 | 0.008 | 0.017 | 0.012 | 0.010 | 0.010 |
| 1 | 2000 | 0.010 | 0.005 | 0.003 | 0.003 | 0.005 | 0.006 | 0.006 | 0.006 | 0.011 | 0.008 | 0.007 | 0.007 |
| 2 | 125 | 0.044 | 0.021 | 0.014 | 0.014 | 0.021 | 0.026 | 0.024 | 0.024 | 0.048 | 0.034 | 0.028 | 0.028 |
| 2 | 250 | 0.031 | 0.014 | 0.009 | 0.009 | 0.014 | 0.019 | 0.017 | 0.017 | 0.034 | 0.024 | 0.019 | 0.019 |
| 2 | 500 | 0.022 | 0.011 | 0.007 | 0.007 | 0.010 | 0.013 | 0.012 | 0.012 | 0.025 | 0.017 | 0.014 | 0.014 |
| 2 | 1000 | 0.015 | 0.007 | 0.005 | 0.005 | 0.007 | 0.009 | 0.008 | 0.008 | 0.017 | 0.012 | 0.009 | 0.009 |
| 2 | 2000 | 0.011 | 0.006 | 0.004 | 0.004 | 0.005 | 0.007 | 0.006 | 0.006 | 0.012 | 0.009 | 0.007 | 0.007 |
| 3 | 125 | 0.039 | 0.023 | 0.013 | 0.012 | 0.018 | 0.023 | 0.021 | 0.021 | 0.043 | 0.033 | 0.025 | 0.024 |
| 3 | 250 | 0.028 | 0.017 | 0.009 | 0.009 | 0.013 | 0.017 | 0.015 | 0.015 | 0.031 | 0.024 | 0.018 | 0.018 |
| 3 | 500 | 0.019 | 0.011 | 0.006 | 0.006 | 0.009 | 0.012 | 0.011 | 0.011 | 0.022 | 0.016 | 0.012 | 0.012 |
| 3 | 1000 | 0.014 | 0.008 | 0.004 | 0.004 | 0.006 | 0.008 | 0.007 | 0.007 | 0.015 | 0.011 | 0.008 | 0.008 |
| 3 | 2000 | 0.010 | 0.006 | 0.003 | 0.003 | 0.005 | 0.006 | 0.005 | 0.005 | 0.011 | 0.008 | 0.006 | 0.006 |

Note. orig = original RMSD estimator (see Equation (9)); abc = RMSD estimator based on analytical bias correction (see Equation (25)); bbc = RMSD estimator based on bootstrap bias correction (see Equation (27)); jbc = RMSD estimator based on jackknife bias correction (see Equation (29)).
Table 2. Study 2: Population value of the original RMSD estimator in a test with $I = 9$ items as a function of item discriminations of misfitting items and the number of misfitting items.

| Item | a=0, m=1 | m=2 | m=3 | a=0.2, m=1 | m=2 | m=3 | a=0.4, m=1 | m=2 | m=3 | a=0.6, m=1 | m=2 | m=3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.011 | 0.018 | 0.036 | 0.008 | 0.014 | 0.033 | 0.006 | 0.011 | 0.026 | 0.004 | 0.007 | 0.018 |
| 2 | 0.079 | 0.057 | 0.036 | 0.061 | 0.047 | 0.033 | 0.043 | 0.035 | 0.027 | 0.027 | 0.023 | 0.018 |
| 3 | 0.009 | 0.057 | 0.036 | 0.007 | 0.046 | 0.033 | 0.005 | 0.033 | 0.025 | 0.003 | 0.021 | 0.016 |
| 4 | 0.011 | 0.018 | 0.019 | 0.008 | 0.014 | 0.017 | 0.006 | 0.011 | 0.014 | 0.004 | 0.007 | 0.009 |
| 5 | 0.012 | 0.019 | 0.021 | 0.009 | 0.015 | 0.019 | 0.006 | 0.011 | 0.015 | 0.004 | 0.007 | 0.010 |
| 6 | 0.009 | 0.014 | 0.016 | 0.007 | 0.011 | 0.015 | 0.005 | 0.008 | 0.012 | 0.003 | 0.005 | 0.008 |

Note. a = item discrimination of the misfitting items; m = $I_{misfit}$ = number of misfitting items.
Table 3. Study 2: Population value of the original RMSD estimator in a test with one misfitting item with an item discrimination of $a = 0.2$ as a function of the number of items I.

| Item | I = 6 | I = 9 | I = 12 | I = 15 |
|---|---|---|---|---|
| 1 | 0.008 | 0.008 | 0.008 | 0.007 |
| **2** | 0.037 | 0.061 | 0.078 | 0.090 |
| 3 | 0.007 | 0.007 | 0.007 | 0.006 |
| 4 | 0.008 | 0.008 | 0.008 | 0.007 |
| 5 | 0.008 | 0.009 | 0.009 | 0.008 |
| 6 | 0.007 | 0.007 | 0.007 | 0.006 |

Note. The misfitting item (Item 2) is printed in bold.
Table 4. Study 2: Mean, standard deviation (SD) and root mean square error (RMSE) for different estimators of the RMSD statistic in a test with $I = 9$ items for $I_{misfit} = 1$ or $I_{misfit} = 3$ misfitting items with an item discrimination of $a = 0.2$ as a function of sample size N.

| Item | m | N | Mean: orig | abc | bbc | jbc | SD: orig | abc | bbc | jbc | RMSE: orig | abc | bbc | jbc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **2** | 1 | 125 | 0.077 | 0.063 | 0.055 | 0.055 | 0.033 | 0.040 | 0.042 | 0.042 | 0.037 | 0.040 | 0.042 | 0.042 |
| **2** | 1 | 250 | 0.071 | 0.064 | 0.060 | 0.060 | 0.026 | 0.029 | 0.031 | 0.031 | 0.028 | 0.030 | 0.031 | 0.031 |
| **2** | 1 | 500 | 0.070 | 0.066 | 0.064 | 0.064 | 0.019 | 0.020 | 0.021 | 0.021 | 0.021 | 0.021 | 0.021 | 0.021 |
| **2** | 1 | 1000 | 0.068 | 0.067 | 0.066 | 0.066 | 0.013 | 0.014 | 0.014 | 0.014 | 0.015 | 0.015 | 0.015 | 0.015 |
| **2** | 1 | 2000 | 0.068 | 0.067 | 0.067 | 0.067 | 0.009 | 0.009 | 0.010 | 0.010 | 0.012 | 0.011 | 0.011 | 0.011 |
| **2** | 3 | 125 | 0.066 | 0.050 | 0.043 | 0.043 | 0.030 | 0.038 | 0.038 | 0.038 | 0.045 | 0.041 | 0.040 | 0.040 |
| **2** | 3 | 250 | 0.061 | 0.052 | 0.048 | 0.048 | 0.024 | 0.029 | 0.030 | 0.030 | 0.037 | 0.034 | 0.033 | 0.033 |
| **2** | 3 | 500 | 0.057 | 0.053 | 0.051 | 0.050 | 0.018 | 0.020 | 0.021 | 0.021 | 0.030 | 0.028 | 0.027 | 0.027 |
| **2** | 3 | 1000 | 0.057 | 0.055 | 0.054 | 0.054 | 0.013 | 0.013 | 0.013 | 0.013 | 0.027 | 0.025 | 0.025 | 0.024 |
| **2** | 3 | 2000 | 0.056 | 0.055 | 0.055 | 0.055 | 0.009 | 0.009 | 0.009 | 0.009 | 0.024 | 0.024 | 0.023 | 0.023 |
| 5 | 1 | 125 | 0.044 | 0.020 | 0.014 | 0.013 | 0.020 | 0.026 | 0.023 | 0.023 | 0.040 | 0.028 | 0.024 | 0.024 |
| 5 | 1 | 250 | 0.032 | 0.017 | 0.012 | 0.012 | 0.015 | 0.020 | 0.018 | 0.018 | 0.028 | 0.021 | 0.019 | 0.018 |
| 5 | 1 | 500 | 0.023 | 0.012 | 0.009 | 0.009 | 0.011 | 0.014 | 0.014 | 0.013 | 0.018 | 0.015 | 0.014 | 0.013 |
| 5 | 1 | 1000 | 0.018 | 0.010 | 0.007 | 0.007 | 0.008 | 0.011 | 0.010 | 0.010 | 0.012 | 0.011 | 0.010 | 0.010 |
| 5 | 1 | 2000 | 0.013 | 0.008 | 0.006 | 0.006 | 0.006 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.009 | 0.009 |
| 5 | 3 | 125 | 0.049 | 0.028 | 0.022 | 0.022 | 0.023 | 0.030 | 0.029 | 0.029 | 0.038 | 0.031 | 0.029 | 0.029 |
| 5 | 3 | 250 | 0.037 | 0.023 | 0.018 | 0.018 | 0.018 | 0.023 | 0.023 | 0.023 | 0.025 | 0.023 | 0.023 | 0.023 |
| 5 | 3 | 500 | 0.032 | 0.023 | 0.019 | 0.019 | 0.014 | 0.018 | 0.018 | 0.018 | 0.019 | 0.018 | 0.018 | 0.018 |
| 5 | 3 | 1000 | 0.029 | 0.024 | 0.022 | 0.022 | 0.011 | 0.013 | 0.014 | 0.014 | 0.014 | 0.014 | 0.014 | 0.014 |
| 5 | 3 | 2000 | 0.028 | 0.025 | 0.024 | 0.024 | 0.008 | 0.009 | 0.009 | 0.009 | 0.011 | 0.011 | 0.010 | 0.010 |

Note. orig = original RMSD estimator (see Equation (9)); abc = RMSD estimator based on analytical bias correction (see Equation (25)); bbc = RMSD estimator based on bootstrap bias correction (see Equation (27)); jbc = RMSD estimator based on jackknife bias correction (see Equation (29)); m = $I_{misfit}$ = number of misfitting items. The misfitting item (Item 2) is printed in bold.
Table 5. Study 3: Population value of the original RMSD estimator in a test with $I = 9$ items as a function of uniform differential item functioning of misfitting items and the number of misfitting items.

| Item | δ=0.2, m=1 | m=2 | m=3 | δ=0.4, m=1 | m=2 | m=3 | δ=0.6, m=1 | m=2 | m=3 | δ=1.0, m=1 | m=2 | m=3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.005 | 0.006 | 0.026 | 0.009 | 0.012 | 0.054 | 0.013 | 0.018 | 0.083 | 0.019 | 0.027 | 0.144 |
| 2 | 0.035 | 0.032 | 0.027 | 0.069 | 0.062 | 0.053 | 0.101 | 0.092 | 0.077 | 0.160 | 0.146 | 0.122 |
| 3 | 0.004 | 0.018 | 0.017 | 0.008 | 0.035 | 0.031 | 0.012 | 0.049 | 0.043 | 0.019 | 0.072 | 0.062 |
| 4 | 0.005 | 0.006 | 0.012 | 0.009 | 0.012 | 0.025 | 0.013 | 0.018 | 0.036 | 0.019 | 0.027 | 0.059 |
| 5 | 0.006 | 0.009 | 0.014 | 0.011 | 0.018 | 0.027 | 0.016 | 0.026 | 0.040 | 0.026 | 0.040 | 0.065 |
| 6 | 0.004 | 0.007 | 0.009 | 0.008 | 0.014 | 0.017 | 0.012 | 0.020 | 0.026 | 0.019 | 0.032 | 0.042 |

Note. δ = DIF in item difficulties; m = $I_{misfit}$ = number of misfitting items.
Table 6. Study 3: Mean, standard deviation (SD) and root mean square error (RMSE) for different estimators of the RMSD statistic in a test with $I = 9$ items for $I_{misfit} = 1$ or $I_{misfit} = 3$ misfitting items with a uniform DIF effect of $\delta = 0.6$ as a function of sample size N.

| Item | m | N | Mean: orig | abc | bbc | jbc | SD: orig | abc | bbc | jbc | RMSE: orig | abc | bbc | jbc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **2** | 1 | 125 | 0.105 | 0.096 | 0.090 | 0.089 | 0.034 | 0.039 | 0.042 | 0.042 | 0.035 | 0.039 | 0.043 | 0.043 |
| **2** | 1 | 250 | 0.103 | 0.100 | 0.097 | 0.097 | 0.025 | 0.027 | 0.028 | 0.028 | 0.025 | 0.027 | 0.028 | 0.028 |
| **2** | 1 | 500 | 0.102 | 0.100 | 0.099 | 0.099 | 0.018 | 0.019 | 0.019 | 0.019 | 0.018 | 0.019 | 0.019 | 0.019 |
| **2** | 1 | 1000 | 0.101 | 0.100 | 0.099 | 0.099 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.014 | 0.014 |
| **2** | 1 | 2000 | 0.102 | 0.101 | 0.101 | 0.101 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 |
| **2** | 3 | 125 | 0.083 | 0.072 | 0.064 | 0.063 | 0.033 | 0.039 | 0.042 | 0.041 | 0.033 | 0.040 | 0.044 | 0.044 |
| **2** | 3 | 250 | 0.081 | 0.076 | 0.071 | 0.071 | 0.025 | 0.027 | 0.029 | 0.029 | 0.025 | 0.027 | 0.030 | 0.030 |
| **2** | 3 | 500 | 0.078 | 0.076 | 0.074 | 0.074 | 0.018 | 0.019 | 0.020 | 0.020 | 0.018 | 0.019 | 0.020 | 0.020 |
| **2** | 3 | 1000 | 0.077 | 0.076 | 0.075 | 0.075 | 0.013 | 0.014 | 0.014 | 0.014 | 0.013 | 0.014 | 0.014 | 0.014 |
| **2** | 3 | 2000 | 0.078 | 0.077 | 0.077 | 0.077 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 |
| 5 | 1 | 125 | 0.046 | 0.024 | 0.017 | 0.017 | 0.022 | 0.028 | 0.027 | 0.027 | 0.037 | 0.029 | 0.027 | 0.027 |
| 5 | 1 | 250 | 0.034 | 0.018 | 0.014 | 0.013 | 0.017 | 0.021 | 0.020 | 0.020 | 0.024 | 0.021 | 0.021 | 0.021 |
| 5 | 1 | 500 | 0.026 | 0.016 | 0.012 | 0.012 | 0.013 | 0.016 | 0.016 | 0.016 | 0.016 | 0.016 | 0.017 | 0.017 |
| 5 | 1 | 1000 | 0.021 | 0.014 | 0.011 | 0.011 | 0.010 | 0.012 | 0.012 | 0.012 | 0.011 | 0.012 | 0.013 | 0.013 |
| 5 | 1 | 2000 | 0.019 | 0.015 | 0.013 | 0.013 | 0.008 | 0.010 | 0.010 | 0.010 | 0.008 | 0.010 | 0.011 | 0.011 |
| 5 | 3 | 125 | 0.057 | 0.037 | 0.030 | 0.029 | 0.027 | 0.034 | 0.035 | 0.035 | 0.032 | 0.035 | 0.036 | 0.036 |
| 5 | 3 | 250 | 0.048 | 0.036 | 0.031 | 0.031 | 0.022 | 0.027 | 0.028 | 0.028 | 0.023 | 0.027 | 0.030 | 0.029 |
| 5 | 3 | 500 | 0.044 | 0.038 | 0.034 | 0.034 | 0.018 | 0.021 | 0.022 | 0.022 | 0.018 | 0.021 | 0.023 | 0.023 |
| 5 | 3 | 1000 | 0.041 | 0.038 | 0.036 | 0.036 | 0.012 | 0.014 | 0.015 | 0.015 | 0.013 | 0.014 | 0.015 | 0.015 |
| 5 | 3 | 2000 | 0.041 | 0.040 | 0.039 | 0.039 | 0.009 | 0.009 | 0.010 | 0.010 | 0.009 | 0.009 | 0.010 | 0.010 |

Note. orig = original RMSD estimator (see Equation (9)); abc = RMSD estimator based on analytical bias correction (see Equation (25)); bbc = RMSD estimator based on bootstrap bias correction (see Equation (27)); jbc = RMSD estimator based on jackknife bias correction (see Equation (29)); m = $I_{misfit}$ = number of misfitting items. The misfitting item (Item 2) is printed in bold.
Table 7. Study 4: Population value of the original RMSD estimator in a test with two misfitting items with uniform DIF effects of $|\delta| = 0.6$ for balanced DIF and unbalanced DIF as a function of the number of items I.

| Item | Balanced: I=6 | I=9 | I=12 | I=15 | Unbalanced: I=6 | I=9 | I=12 | I=15 |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.009 | 0.006 | 0.004 | 0.003 | 0.026 | 0.018 | 0.013 | 0.011 |
| **2** | 0.112 | 0.114 | 0.115 | 0.115 | 0.079 | 0.092 | 0.098 | 0.102 |
| **3** | 0.092 | 0.092 | 0.092 | 0.091 | 0.039 | 0.049 | 0.054 | 0.057 |
| 4 | 0.009 | 0.006 | 0.004 | 0.003 | 0.026 | 0.018 | 0.013 | 0.011 |
| 5 | 0.006 | 0.004 | 0.003 | 0.003 | 0.039 | 0.026 | 0.019 | 0.015 |
| 6 | 0.002 | 0.002 | 0.001 | 0.001 | 0.030 | 0.020 | 0.015 | 0.012 |

Note. The misfitting items (Items 2 and 3) are printed in bold.
