Article

Relating the One-Parameter Logistic Diagnostic Classification Model to the Rasch Model and One-Parameter Logistic Mixed, Partial, and Probabilistic Membership Diagnostic Classification Models

by
Alexander Robitzsch
1,2
1
IPN–Leibniz Institute for Science and Mathematics Education, Olshausenstraße 62, 24118 Kiel, Germany
2
Centre for International Student Assessment (ZIB), Olshausenstraße 62, 24118 Kiel, Germany
Foundations 2023, 3(3), 621-633; https://doi.org/10.3390/foundations3030037
Submission received: 30 August 2023 / Revised: 15 September 2023 / Accepted: 20 September 2023 / Published: 21 September 2023
(This article belongs to the Section Mathematical Sciences)

Abstract
Diagnostic classification models (DCMs) are statistical models with discrete latent variables (so-called skills) to analyze multiple binary variables (i.e., items). The one-parameter logistic diagnostic classification model (1PLDCM) is a DCM with one skill and shares desirable measurement properties with the Rasch model. This article shows that the 1PLDCM is indeed a latent class Rasch model. Furthermore, the relationship of the 1PLDCM to extensions of the DCM to mixed, partial, and probabilistic memberships is treated. It is argued that the partial and probabilistic membership models are also equivalent to the Rasch model. The fit of the different models was empirically investigated using six datasets. It turned out for these datasets that the 1PLDCM always had a worse fit than the Rasch model and mixed and partial membership extensions of the DCM.

1. Introduction

In the social sciences, humans (i.e., subjects or students) respond to multiple tasks (i.e., items) in a test. For example, students are asked to solve items in a mathematics test, or patients are asked to report whether particular symptoms occurred. These tests result in multivariate datasets in which each item takes dichotomous values of zero or one.
These kinds of multivariate data are frequently analyzed by statistical models that summarize the set of items in a single (latent) factor variable. Diagnostic classification models (DCM; [1,2,3,4,5,6]) are statistical models that provide classifications of subjects for their proficiency in an administered test. In these models, the latent variables are discrete, whereas they are continuous in item response theory (IRT; [7,8,9]) models. DCMs are now frequently used in educational measurement [10,11] and clinical psychology [12]. The software packages GDINA [13,14] and CDM [15,16] implemented the most important DCMs and made the DCM class widely accessible.
In a recent article, Madison, Wind, Maas, Yamaguchi, and Haab [17] proposed the one-parameter logistic diagnostic classification model (1PLDCM) as a particularly constrained DCM. The 1PLDCM possesses only one parameter per item, whereas earlier proposed DCMs use two parameters per item. Madison et al. recommend the 1PLDCM because it shares some desirable statistical properties with the Rasch model (RM; [18]), one of the most popular IRT models. In this article, it is shown that the 1PLDCM is indeed a particular latent class variant of the RM. This finding provides a deeper conceptual insight into the statistical development of [17]. Furthermore, it demonstrates that the RM can be seen as a unifying model that allows the multivariate vector of items to be summarized by continuous as well as discrete latent variables. Most DCMs use binary latent variables. Because this article shows that the 1PLDCM is a latent class RM with two classes, the one-parameter DCM can be statistically compared to RMs with a larger number of latent classes (i.e., more than two). Hence, it can be investigated whether the binary classification of subjects could be improved by using a larger number of classes.
Furthermore, the relationships between ordinary DCMs and DCM extensions with mixed, partial, and probabilistic membership are investigated. These extensions recently appeared in the literature [19]. However, the relationship of these extensions to unidimensional and multidimensional IRT models has not yet been thoroughly studied. In this article, the different membership-type extensions of the one-parameter DCM are discussed. In fact, it is shown that partial and probabilistic membership DCMs are equivalent, both for unidimensional DCMs and for DCMs with multiple latent variables and a simple structure of items. Furthermore, the one-parameter partial membership DCM for a unidimensional skill is equivalent to the RM with a particular bounded distribution of the latent variable.
The rest of this article is organized as follows. Section 2 reviews unidimensional IRT models in general and focuses on the RM and the generalized logistic IRT model. Section 3 discusses unidimensional DCMs. The recently proposed 1PLDCM is related to the latent class RM. In Section 4, mixed, partial, and probabilistic membership extensions of the DCM are described. Section 5 compares the different models utilizing six datasets. Finally, the article closes with a discussion in Section 6.

2. Unidimensional Item Response Models

In this section, IRT models and their implementation (Section 2.1) are reviewed. Afterwards, we discuss the RM (Section 2.2) and the generalized logistic IRT model (Section 2.3) as an extension of the RM.
Let $\boldsymbol{X} = (X_1, \ldots, X_I)$ be a vector of $I$ binary random variables $X_i$ ($i = 1, \ldots, I$). The random variables $X_i$ are also referred to as items or item responses. A unidimensional IRT model [9,20,21] parametrizes the multivariate distribution $P(\boldsymbol{X} = \boldsymbol{x})$ for $\boldsymbol{x} = (x_1, \ldots, x_I) \in \{0,1\}^I$ as
$$P(\boldsymbol{X} = \boldsymbol{x}) = \int \prod_{i=1}^{I} P_i(\theta; \boldsymbol{\gamma}_i)^{x_i} \bigl( 1 - P_i(\theta; \boldsymbol{\gamma}_i) \bigr)^{1 - x_i} \, \mathrm{d}F_{\boldsymbol{\delta}}(\theta), \qquad (1)$$
where $F_{\boldsymbol{\delta}}$ is the distribution function of the latent trait $\theta$ (also referred to as the ability variable), which depends on a parameter $\boldsymbol{\delta}$. The latent trait $\theta$ can take any value between minus infinity and plus infinity. This random variable can have a continuous distribution, a discrete distribution, or a mixture of both. Moreover, the unidimensional latent variable $\theta$ could also be replaced by a multidimensional latent variable $\boldsymbol{\theta}$ in (1), resulting in a multidimensional IRT model. The function $P_i(\theta; \boldsymbol{\gamma}_i) = P(X_i = 1 \,|\, \theta)$ is referred to as the item response function (IRF) for item $i$, which depends on a parameter $\boldsymbol{\gamma}_i$. The specification of the probability distribution (1) implies that the items $i = 1, \ldots, I$ are conditionally independent given the latent trait $\theta$. Identification constraints on the item parameters $\boldsymbol{\gamma}_i$ or the distribution parameters $\boldsymbol{\delta}$ must be imposed to ensure model identification [22].
After the IRT model (1) has been estimated, an individual ability estimate $\hat{\theta}$ can be obtained by maximizing the log-likelihood function $l$, which yields the most likely ability value $\theta$ given a vector of item responses $\boldsymbol{x}$ for a subject. The log-likelihood function is given by (see [23])
$$l(\theta; \boldsymbol{x}) = \sum_{i=1}^{I} \Bigl[ x_i \log P_i(\theta; \boldsymbol{\gamma}_i) + (1 - x_i) \log\bigl( 1 - P_i(\theta; \boldsymbol{\gamma}_i) \bigr) \Bigr]. \qquad (2)$$
By taking the derivative of l with respect to θ in (2), the ability estimate θ ^ fulfills the nonlinear equation
$$\frac{\partial l}{\partial \theta}\Big|_{\theta = \hat{\theta}} = \sum_{i=1}^{I} \left[ x_i \, \frac{P_i'(\hat{\theta}; \boldsymbol{\gamma}_i)}{P_i(\hat{\theta}; \boldsymbol{\gamma}_i)} - (1 - x_i) \, \frac{P_i'(\hat{\theta}; \boldsymbol{\gamma}_i)}{1 - P_i(\hat{\theta}; \boldsymbol{\gamma}_i)} \right] = 0, \qquad (3)$$
where $P_i'(\theta) = \partial P_i / \partial \theta$. Note that the individual ability estimate $\hat{\theta}$ can also take the values minus or plus infinity, in particular if $\sum_{i=1}^{I} x_i$ equals either 0 or $I$.
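For the Rasch model discussed below, Equation (3) simplifies considerably because $P_i'(\theta) = P_i(\theta)\bigl(1 - P_i(\theta)\bigr)$, so the score reduces to $\sum_i \bigl(x_i - P_i(\theta)\bigr)$. The following minimal Python sketch (a hypothetical helper, not code from any of the cited packages) solves this score equation by Newton-Raphson:

```python
import math

def rasch_mle_theta(x, b, theta0=0.0, max_iter=50, tol=1e-10):
    """Newton-Raphson solution of the score equation (3) for the Rasch model.

    x: list of 0/1 item responses; b: list of item difficulties.
    For the Rasch model, the score is sum(x_i - P_i(theta)) and the
    observed information is sum(P_i(theta) * (1 - P_i(theta))).
    """
    theta = theta0
    for _ in range(max_iter):
        p = [1.0 / (1.0 + math.exp(-(theta - bi))) for bi in b]
        score = sum(xi - pi for xi, pi in zip(x, p))   # dl/dtheta
        info = sum(pi * (1.0 - pi) for pi in p)        # -d2l/dtheta2
        step = score / info
        theta += step
        if abs(step) < tol:
            break
    return theta
```

For an all-zero or all-one response pattern, the score equation has no finite root and the iterates drift toward minus or plus infinity, matching the remark above.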

2.1. Implementation

The IRT model (1) can be estimated using marginal maximum likelihood (MML) using an expectation–maximization (EM) algorithm [24,25]. Alternatively, the estimation could be carried out using Newton–Raphson algorithms. In the R [26] software, the packages mirt [27] or sirt [28] can be utilized to estimate the IRT model (1) with user-defined IRFs P i ( θ ; γ i ) ( i = 1 , , I ) and distribution functions F δ . In the mirt [27] package, the function mirt::mirt() can be used to estimate user-defined IRT models. When relying on this function, user-defined IRFs P i can be specified with mirt::createItem(), whereas user-defined distributions F δ can be specified with mirt::createGroup(). In the sirt [28] package, the function sirt::xxirt() can be used in combination with sirt::xxirt_createDiscItem() (for defining IRFs) and sirt::xxirt_createThetaDistribution() (for defining the distribution  F δ ).
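To make the MML-EM scheme concrete, the following minimal Python sketch estimates Rasch item difficulties by marginal maximum likelihood with a fixed quadrature approximation of a standard normal trait distribution. The function name rasch_mml_em and all numerical defaults are my own illustrative choices; this is not the mirt or sirt implementation.

```python
import numpy as np

def rasch_mml_em(X, n_quad=21, max_iter=200, tol=1e-8):
    """Minimal MML-EM sketch for the Rasch model.

    The N(0,1) trait distribution is approximated on a fixed quadrature grid;
    the M-step applies one Newton update per item difficulty.
    """
    N, I = X.shape
    theta = np.linspace(-4.0, 4.0, n_quad)             # quadrature nodes
    w = np.exp(-0.5 * theta**2)
    w /= w.sum()                                       # normal prior weights
    b = np.zeros(I)
    for _ in range(max_iter):
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # T x I
        # E-step: posterior distribution over nodes for each response pattern
        logL = X @ np.log(P.T) + (1.0 - X) @ np.log(1.0 - P.T)    # N x T
        post = np.exp(logL) * w[None, :]
        post /= post.sum(axis=1, keepdims=True)
        # M-step: expected counts, then a Newton step for each difficulty
        n_t = post.sum(axis=0)                         # expected persons per node
        r_ti = post.T @ X                              # expected solvers per node/item
        grad = (r_ti - n_t[:, None] * P).sum(axis=0)
        hess = (n_t[:, None] * P * (1.0 - P)).sum(axis=0)
        step = grad / hess
        b -= step
        if np.max(np.abs(step)) < tol:
            break
    return b - b.mean()                                # identification: mean zero
```

Centering the difficulties implements the identification constraint discussed in Section 2.2.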

2.2. Rasch Model

The RM [18] belongs to the class of IRT models that use the logistic link function in the IRF P i . The IRF is given by
$$P_i(\theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)} = \Psi(\theta - b_i), \qquad (4)$$
where $b_i$ denotes the item difficulty. Furthermore, $\Psi(x) = \bigl(1 + \exp(-x)\bigr)^{-1}$ denotes the logistic link function. This is why the RM is also referred to as the one-parameter logistic (1PL) IRT model.
In addition, an identification constraint must be imposed when estimating the RM (4) (see [29]). This can be seen directly from (4) because the parameters $\theta$ and $b_i$ can be arbitrarily shifted by adding a constant $c$ without changing the probabilities $P_i(\theta)$ (i.e., by defining $\theta^* = \theta + c$ and $b_i^* = b_i + c$ for $i = 1, \ldots, I$). If a normal distribution is assumed for $\theta$, the mean is frequently fixed to zero (i.e., $\mathbb{E}(\theta) = 0$). Alternatively, the item difficulty of some item $i_0$ could be set to zero (i.e., $b_{i_0} = 0$). In either case, the variance $\sigma^2$ of $\theta$ can be estimated. Alternatively, one could fix the variance of $\theta$ in the RM to 1 and specify the IRF as
$$P_i(\theta) = \frac{\exp\bigl(\sigma(\theta - b_i)\bigr)}{1 + \exp\bigl(\sigma(\theta - b_i)\bigr)}. \qquad (5)$$
In this case, a common item discrimination σ is estimated across items.
The RM fulfills the property of invariant item ordering [17]. That is, the order of the items is the same for all persons: if $P_i(\theta_1) < P_j(\theta_1)$ for a person with ability $\theta_1$ and items $i \neq j$, then $P_i(\theta_2) < P_j(\theta_2)$ must also hold for all other abilities $\theta_2$. This property implies that the IRFs must be parallel; that is, they must not intersect.
The general IRT model (1) simplifies in the case of the RM to
$$P(\boldsymbol{X} = \boldsymbol{x}) = \int \prod_{i=1}^{I} \frac{\exp\bigl(x_i(\theta - b_i)\bigr)}{1 + \exp(\theta - b_i)} \, \mathrm{d}F_{\boldsymbol{\delta}}(\theta). \qquad (6)$$
Note that (6) can be further simplified to
$$P(\boldsymbol{X} = \boldsymbol{x}) = \int \exp\bigl(l(\theta; \boldsymbol{x})\bigr) \, \mathrm{d}F_{\boldsymbol{\delta}}(\theta), \quad \text{where} \quad l(\theta; \boldsymbol{x}) = \theta \sum_{i=1}^{I} x_i - \sum_{i=1}^{I} x_i b_i - \sum_{i=1}^{I} \log\bigl(1 + \exp(\theta - b_i)\bigr), \qquad (7)$$
and $l$ is the individual log-likelihood function defined in (2). From (7), one recognizes that the (unweighted) sum score $\sum_{i=1}^{I} X_i$ is a sufficient statistic for $\theta$ (see [17,29]). This property has great appeal among practitioners and some measurement enthusiasts (see, e.g., [30,31,32,33,34,35]).
Different distributions of $\theta$ can be specified via $F_{\boldsymbol{\delta}}$ in the estimation of the RM. As mentioned above, the normal distribution is most frequently utilized. This assumption implies that a continuous distribution is used to characterize the $\theta$ distribution. Alternatively, skewed distributions for $\theta$ can be implemented based on log-linear smoothing of probabilities $P(\theta = \theta_t)$ on a grid with $T$ fixed trait values $\theta_1, \ldots, \theta_T$ (see [36,37]). Furthermore, a discrete distribution of $\theta$ with $C \geq 2$ latent classes can be assumed. In this case, class locations $\theta_c$ and probabilities $\pi_c = P(\theta = \theta_c)$ must be estimated ($c = 1, \ldots, C$). This model is called the latent class Rasch model (LCRM; [38,39,40,41]). Note that the general RM (7) simplifies in the case of the LCRM to
$$P(\boldsymbol{X} = \boldsymbol{x}) = \sum_{c=1}^{C} \exp\bigl(l(\theta_c; \boldsymbol{x})\bigr) \, \pi_c. \qquad (8)$$
The individual likelihood $l$ in (8) still depends on the item difficulties $b_i$, but the population of persons is now partitioned into $C$ classes. Note that some identification constraint on the location of the $\theta$ distribution is required; for example, one could fix the first location point to zero (i.e., $\theta_1 = 0$). In a test consisting of $I$ items, at most $I/2$ located latent classes can be specified in the LCRM because the parameters of larger models cannot be identified [38]. Notably, MML estimation with latent classes imposes only weak assumptions about the data-generating distribution $F_{\boldsymbol{\delta}}$, but it relies on a possibly questionable discrete representation of the $\theta$ distribution. Classifying persons into different discrete ability levels might nevertheless be conceptually appealing in empirical applications [42].
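A direct evaluation of the marginal probability (8) is straightforward. The Python helper below (hypothetical name lcrm_prob) computes it for a given response pattern:

```python
import math

def lcrm_prob(x, b, theta_c, pi_c):
    """Marginal probability P(X = x) under the latent class Rasch model, Eq. (8)."""
    total = 0.0
    for t, p_c in zip(theta_c, pi_c):
        # individual log-likelihood l(theta_c; x) from Eq. (2)
        l = 0.0
        for xi, bi in zip(x, b):
            p1 = 1.0 / (1.0 + math.exp(-(t - bi)))
            l += xi * math.log(p1) + (1 - xi) * math.log(1.0 - p1)
        total += math.exp(l) * p_c
    return total
```

Summing lcrm_prob over all $2^I$ response patterns yields 1, which serves as a convenient sanity check on the likelihood.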
It should be emphasized that the sum score $\sum_{i=1}^{I} X_i$ remains a sufficient statistic for $\theta$, irrespective of the assumed distribution for $\theta$. There exist dozens of estimation methods for the RM, each relying on slightly different assumptions [43].

2.3. Generalized Logistic Item Response Model

The RM is one of the simplest IRT models. Each item is described by a single item parameter b i . A more general one-parameter IRT model can be defined by the IRFs
$$P_i(\theta) = g\bigl(\sigma(\theta - b_i)\bigr), \qquad (9)$$
where $g: \mathbb{R} \to (0,1)$ is a monotone link function. In the RM, the logistic function $\Psi$ plays the role of $g$ in (9). In (9), the mean (i.e., location) and the standard deviation (i.e., scale) of the $\theta$ distribution are fixed; hence, a common item discrimination $\sigma$ appears in (9).
Notably, (9) defines a general class of IRT models that share the invariant item-ordering property [44,45]. Alternative link functions to the logistic one, such as the loglog or cloglog link functions [46], can be chosen in (9) (see [47,48]). However, it must be emphasized that the sum score is no longer a sufficient statistic for θ in the general one-parameter IRT model defined in (9).
The class of generalized logistic link functions covers many important link functions as particular cases [49]. The generalized logistic link function $\Psi_{\alpha_1, \alpha_2}$ depends on two parameters $\alpha_1$ and $\alpha_2$ that model deviations from the logistic link function. For asymmetry parameters $\alpha_1$ and $\alpha_2$ (which should be estimated within the interval $(-1, 1)$ to provide reasonable estimates; see [50]), the link function $\Psi_{\alpha_1, \alpha_2}$ is defined by
$$\Psi_{\alpha_1, \alpha_2}(x) = \Psi\bigl(S_{\alpha_1, \alpha_2}(x)\bigr), \qquad (10)$$
where S α 1 , α 2 is defined by
$$S_{\alpha_1, \alpha_2}(x) = \begin{cases} \alpha_1^{-1}\bigl(\exp(\alpha_1 x) - 1\bigr) & \text{if } x \geq 0 \text{ and } \alpha_1 > 0 \\ x & \text{if } x \geq 0 \text{ and } \alpha_1 = 0 \\ -\alpha_1^{-1}\log(1 - \alpha_1 x) & \text{if } x \geq 0 \text{ and } \alpha_1 < 0 \\ -\alpha_2^{-1}\bigl(\exp(-\alpha_2 x) - 1\bigr) & \text{if } x < 0 \text{ and } \alpha_2 > 0 \\ x & \text{if } x < 0 \text{ and } \alpha_2 = 0 \\ \alpha_2^{-1}\log(1 + \alpha_2 x) & \text{if } x < 0 \text{ and } \alpha_2 < 0 \end{cases} \qquad (11)$$
The logistic link function is obtained with $\alpha_1 = \alpha_2 = 0$. The probit link function is approximately obtained with $\alpha_1 = \alpha_2 = 0.12$. More generally, symmetric link functions are obtained for $\alpha_1 = \alpha_2$, whereas asymmetry is introduced by imposing $\alpha_1 \neq \alpha_2$.
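The piecewise transform in (11) can be coded directly. The Python sketch below is illustrative (hypothetical helper names), assuming the sign conventions of the reconstruction above; note that the transform is odd in $x$ when $\alpha_1 = \alpha_2$, which makes the resulting link symmetric.

```python
import math

def stukel_transform(x, a1, a2):
    """Piecewise transform S_{a1,a2}(x) from Eq. (11); identity when a1 = a2 = 0."""
    if x >= 0:
        if a1 > 0:
            return (math.exp(a1 * x) - 1.0) / a1
        if a1 == 0:
            return x
        return -math.log(1.0 - a1 * x) / a1
    if a2 > 0:
        return -(math.exp(-a2 * x) - 1.0) / a2
    if a2 == 0:
        return x
    return math.log(1.0 + a2 * x) / a2

def gen_logistic(x, a1, a2):
    """Generalized logistic link Psi_{a1,a2}(x) = Psi(S_{a1,a2}(x)) from Eq. (10)."""
    return 1.0 / (1.0 + math.exp(-stukel_transform(x, a1, a2)))
```

At $\alpha_1 = \alpha_2 = 0$, gen_logistic reduces exactly to the ordinary logistic function.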
A one-parameter IRT model based on the generalized logistic link function can be defined by the IRFs
$$P_i(\theta) = \Psi_{\alpha_1, \alpha_2}\bigl(\sigma(\theta - b_i)\bigr), \qquad (12)$$
where $b_i$ is the item difficulty parameter, which depends on the scale of the link function $\Psi_{\alpha_1, \alpha_2}$. The generalized logistic link function has been applied in IRT models in [50,51,52,53]. Importantly, researchers can estimate the joint shape parameters $\alpha_1$ and $\alpha_2$ in the one-parameter model (12) to allow deviations from the RM while retaining the invariant item ordering property, although the sum score is then no longer a sufficient statistic.

3. Unidimensional Diagnostic Classification Models

In this section, unidimensional DCMs are discussed. In this case, the latent variable θ from the general IRT model (1) is replaced with the binary latent variable α (also referred to as an attribute or skill; [4]) that can take values 0 or 1. The value of 1 indicates mastery, whereas α = 0 indicates non-mastery of the skill. The IRT model (1) can then be written as
$$P(\boldsymbol{X} = \boldsymbol{x}) = \sum_{\alpha=0}^{1} \prod_{i=1}^{I} P_i(\alpha; \boldsymbol{\gamma}_i)^{x_i} \bigl( 1 - P_i(\alpha; \boldsymbol{\gamma}_i) \bigr)^{1 - x_i} \, \pi_\alpha, \qquad (13)$$
where π 0 + π 1 = 1 . The parameter π 1 quantifies the proportion of students that master the skill α . The DCM with one skill is also called the mastery model [54,55].

3.1. Two-Parameter Diagnostic Classification Model

The IRF $P_i$ in a unidimensional DCM with a binary skill requires the specification of the two values $P_i(1) = P(X_i = 1 \,|\, \alpha = 1)$ and $P_i(0) = P(X_i = 1 \,|\, \alpha = 0)$. Clearly, these values should range between 0 and 1 because they are conditional item response probabilities. One can formulate the trivial identity for the IRF
$$P_i(\alpha) = P_i(0)(1 - \alpha) + P_i(1)\alpha = P_i(0) + \bigl(P_i(1) - P_i(0)\bigr)\alpha. \qquad (14)$$
With an arbitrary monotonically increasing and continuous link function g, the IRF in (14) can be rewritten as
$$P_i(\alpha) = g(\lambda_{i0} + \lambda_{i1}\alpha), \qquad (15)$$
where $\lambda_{i0} = g^{-1}\bigl(P_i(0)\bigr)$ and $\lambda_{i1} = g^{-1}\bigl(P_i(1)\bigr) - g^{-1}\bigl(P_i(0)\bigr)$. Note that the item parameters $\lambda_{ih}$ (for $h = 0, 1$) depend on the scale of the chosen link function $g$. Different link functions were discussed for the generalized deterministic inputs, noisy “and” gate (GDINA) model [56].
If the item parameters $\lambda_{ih}$ are estimated separately for all items, the choice of the link function $g$ does not matter because all such models are statistically equivalent; that is, item parameters can be transformed without affecting the fit of the model. A particular class of DCMs emerges if the logistic link function $\Psi$ is chosen (i.e., the logistic diagnostic classification model, LDCM; [57]). Its IRFs are given by
$$P_i(\alpha) = \frac{\exp(\lambda_{i0} + \lambda_{i1}\alpha)}{1 + \exp(\lambda_{i0} + \lambda_{i1}\alpha)}. \qquad (16)$$
One recognizes that the weighted sum score $\sum_{i=1}^{I} \lambda_{i1} X_i$ is a sufficient statistic for $\alpha$ because (16) is a variant of the two-parameter logistic (2PL) model [58] in which the values of the trait $\theta$ are restricted to 0 and 1. Also, note that the LDCM is a special case of von Davier’s general diagnostic model [36,59] and of the Formann model [60,61,62], a very general constrained latent class model.

3.2. One-Parameter Logistic Diagnostic Classification Model

As a restriction of the two-parameter logistic DCM, Madison et al. [17] proposed to constrain $\lambda_{i1}$ in (16) to be equal across items. The resulting 1PLDCM has the IRF
$$P_i(\alpha) = \frac{\exp(\lambda_{i0} + \lambda_1 \alpha)}{1 + \exp(\lambda_{i0} + \lambda_1 \alpha)}. \qquad (17)$$
However, it can be seen that the 1PLDCM is an LCRM with two latent classes; that is, the 1PLDCM is, in fact, a Rasch model. Define the item difficulties $b_i = -\lambda_{i0}$ and the trait locations for the $\theta$ variable in the LCRM as $\theta_1 = 0$ and $\theta_2 = \lambda_1$. Then, we obtain from (17)
$$P_i(\theta_c) = \frac{\exp(\theta_c - b_i)}{1 + \exp(\theta_c - b_i)}. \qquad (18)$$
Clearly, $\pi_1$ corresponds to the probability $P(\theta = \theta_2)$, whereas $\pi_0 = P(\theta = \theta_1)$. As argued in [17], the sum score $\sum_{i=1}^{I} X_i$ is a sufficient statistic for $\alpha$. Due to the equivalence of models (17) and (18), this follows directly from the fact that the 1PLDCM is an RM with a particular discrete distribution for the latent trait $\theta$.
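The reparameterization can be checked numerically: with $b_i = -\lambda_{i0}$, $\theta_1 = 0$, and $\theta_2 = \lambda_1$, the IRFs (17) and (18) agree at both skill levels. A small Python sketch (hypothetical helper names):

```python
import math

def irf_1pldcm(alpha, lam_i0, lam1):
    """1PLDCM IRF, Eq. (17), for a binary skill alpha in {0, 1}."""
    z = lam_i0 + lam1 * alpha
    return math.exp(z) / (1.0 + math.exp(z))

def irf_lcrm(theta, b_i):
    """Rasch IRF, Eq. (18), evaluated at a class location theta."""
    z = theta - b_i
    return math.exp(z) / (1.0 + math.exp(z))
```

For example, with $\lambda_{i0} = -0.8$ and $\lambda_1 = 1.5$, the two parameterizations give identical probabilities at $\alpha = 0$ (location $\theta_1 = 0$) and $\alpha = 1$ (location $\theta_2 = 1.5$).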
This property implies that researchers could test alternative assumptions regarding the trait distribution of $\theta$ in the RM. The case of two latent classes in the LCRM corresponds to the binary classification used in the 1PLDCM. However, LCRMs with more than two classes could be compared to the 1PLDCM. Moreover, it is interesting to ask whether a discretely located latent class representation of $\theta$ fits the data better than a (quasi-)continuous normal or skewed distribution.
We do not intend to argue that the choice of a binary classification model (i.e., the 1PLDCM) must be empirically defended against alternatives. However, we think that it is useful to specify different models within the RM framework that cover flexible distributions for $\theta$.

3.3. One-Parameter Generalized Logistic Diagnostic Classification Model

As argued in Section 3.1, the choice of the link function $g$ is irrelevant in two-parameter DCMs. However, as in the case of one-parameter IRT models, the choice of $g$ matters if the discrimination parameter $\lambda_{i1}$ is constrained to be equal across items. Then, the IRF is defined as
$$P_i(\alpha) = g(\lambda_{i0} + \lambda_1 \alpha). \qquad (19)$$
Different choices of $g$ can be tested against each other in terms of model fit. It should be emphasized that the invariant item ordering property is still fulfilled by the DCM in (19) (see Section 2.3). However, the sum score no longer remains a sufficient statistic for $\alpha$. A flexible estimation of the joint link function can again be achieved by utilizing the generalized logistic link function $\Psi_{\alpha_1, \alpha_2}$, which results in the IRF
$$P_i(\alpha) = \Psi_{\alpha_1, \alpha_2}(\lambda_{i0} + \lambda_1 \alpha). \qquad (20)$$
The DCM in (20) could be tested against the 1PLDCM that relies on the logistic link function.

4. Extensions of Diagnostic Classification Models to Mixed and Partial Membership

Researchers have frequently argued that the dichotomous classification into masters and non-masters in DCMs is not always empirically tenable [63,64,65]. The crucial assumption is that membership in DCMs is crisp; that is, students can only belong to the class $\alpha = 0$ or to the class $\alpha = 1$. Mixed membership or grade of membership models weaken this assumption [66,67]. In these models, students are allowed to switch classes (i.e., the mastery and the non-mastery state) across items [68,69]. In this section, the relationships of the DCM and the RM with the DCM extensions to mixed, partial, and probabilistic membership are discussed.
Note that the binary skill $\alpha \in \{0,1\}$ is replaced by a latent variable $\alpha^* \in [0,1]$ that indicates the degree to which a student belongs to the mastery or the non-mastery class. A single variable suffices because mixed membership of only two classes must be represented. In this case, $\alpha^*$ quantifies the degree to which a student belongs to the mastery class $\alpha = 1$, whereas $1 - \alpha^*$ characterizes the degree of belonging to the class $\alpha = 0$. Note that the bounded latent membership variable $\alpha^*$ can equivalently be represented as an unbounded latent variable $\theta^*$ by assuming some injective differentiable transformation function $h$ such that (see, e.g., [19])
$$\alpha^* = h(\theta^*). \qquad (21)$$
In this article, the logistic normal distribution [70] is utilized for $\alpha^*$. In this distribution, the random variable $\theta^*$ is normally distributed with mean $\mu^*$ and standard deviation $\sigma^*$, and the bounded membership variable is defined by the transformation
$$\alpha^* = \Psi(\theta^*). \qquad (22)$$
Note that choosing a very large standard deviation for $\theta^*$ (e.g., $\sigma^* = 1000$) corresponds to a membership variable $\alpha^*$ whose values are concentrated near 0 or 1. That is, the crisp membership utilized in DCMs is obtained as a special case.
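This concentration effect is easy to see by simulation. The sketch below (hypothetical helper names; a numerically stable logistic function is used to avoid overflow for very large $\sigma^*$) draws $\alpha^* = \Psi(\theta^*)$ for $\theta^* \sim N(\mu^*, \sigma^{*2})$:

```python
import math
import random

def stable_psi(z):
    """Overflow-safe logistic function Psi(z)."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def sample_alpha_star(mu, sigma, n, seed=0):
    """Draw membership values alpha* = Psi(theta*) with theta* ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    return [stable_psi(mu + sigma * rng.gauss(0.0, 1.0)) for _ in range(n)]
```

With $\sigma^* = 1000$, virtually all draws lie within $10^{-3}$ of 0 or 1, reproducing crisp membership; with $\sigma^* = 1$, the draws spread over the whole unit interval.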
In the next section, three different types of membership that can be employed for extending DCMs are discussed. We confine ourselves to only considering logistic link functions. Furthermore, the consequences of applying mixed membership extensions of DCMs are only explored for one-parameter models.

4.1. Mixed Membership Diagnostic Classification Model

IRFs for mixed membership models are defined as a weighted sum of the item response probabilities from crisp membership, weighted by the membership values associated with the respective latent class [19,71]. The general IRF in the mixed membership case, as a generalization of the binary-skill case, is given by
$$P_i(\alpha^*) = P(X_i = 1 \,|\, \alpha^*) = P(X_i = 1 \,|\, \alpha = 0)(1 - \alpha^*) + P(X_i = 1 \,|\, \alpha = 1)\alpha^*. \qquad (23)$$
Inserting the probabilities of the 1PLDCM that relies on the logistic link function into (23), we obtain
$$P_i(\alpha^*) = \frac{\exp(\lambda_{i0})}{1 + \exp(\lambda_{i0})}(1 - \alpha^*) + \frac{\exp(\lambda_{i0} + \lambda_1)}{1 + \exp(\lambda_{i0} + \lambda_1)}\alpha^*. \qquad (24)$$
Note that (24) can be simplified to
$$P_i(\alpha^*) = \frac{\exp(\lambda_{i0})}{1 + \exp(\lambda_{i0})} + \left[ \frac{\exp(\lambda_{i0} + \lambda_1)}{1 + \exp(\lambda_{i0} + \lambda_1)} - \frac{\exp(\lambda_{i0})}{1 + \exp(\lambda_{i0})} \right] \alpha^*. \qquad (25)$$
It seems that the sum score $\sum_{i=1}^{I} X_i$ is no longer a sufficient statistic for $\alpha^*$. Also, note that (25) reduces to the 1PLDCM given in (17) if $\alpha^*$ only takes the values 0 or 1.
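The algebraic equivalence of the weighted-sum form (24) and the affine form (25), and the reduction to the 1PLDCM probabilities at $\alpha^* \in \{0, 1\}$, can be verified numerically (hypothetical helper names):

```python
import math

def psi(z):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def irf_mixed(alpha_star, lam_i0, lam1):
    """Mixed membership IRF, weighted-sum form of Eq. (24)."""
    return psi(lam_i0) * (1.0 - alpha_star) + psi(lam_i0 + lam1) * alpha_star

def irf_mixed_affine(alpha_star, lam_i0, lam1):
    """Equivalent affine-in-alpha* form of Eq. (25)."""
    return psi(lam_i0) + (psi(lam_i0 + lam1) - psi(lam_i0)) * alpha_star
```

Note that both forms are linear in $\alpha^*$, which is what distinguishes the mixed membership IRF from the logistic form derived in the next subsection.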

4.2. Partial Membership Diagnostic Classification Model

Partial membership also weakens the assumption of a crisp membership but defines the IRF differently. In this case, the IRF is given as a normalized weighted geometric mean instead of the weighted sum of the mixed membership case (see (23)). In more detail, the IRF is given by (see [71,72])
$$P(X_i = x \,|\, \alpha^*) = \frac{1}{C_i(\alpha^*)} \, P(X_i = x \,|\, \alpha = 0)^{(1 - \alpha^*)} \, P(X_i = x \,|\, \alpha = 1)^{\alpha^*} \quad \text{for } x = 0, 1, \qquad (26)$$
where the normalization constant C i ( α * ) ensures that P ( X i = 0 | α * ) + P ( X i = 1 | α * ) = 1 . By inserting the logistic link function into (26), one obtains the IRF
$$P(X_i = x \,|\, \alpha^*) = \frac{1}{C_i(\alpha^*)} \, \frac{\exp\bigl(x \lambda_{i0} (1 - \alpha^*)\bigr)}{\bigl(1 + \exp(\lambda_{i0})\bigr)^{(1 - \alpha^*)}} \, \frac{\exp\bigl(x (\lambda_{i0} + \lambda_1) \alpha^*\bigr)}{\bigl(1 + \exp(\lambda_{i0} + \lambda_1)\bigr)^{\alpha^*}}. \qquad (27)$$
Now, define another normalization constant $\tilde{C}_i(\alpha^*)$ that absorbs the denominators in (27) in order to simplify the term to
$$P(X_i = x \,|\, \alpha^*) = \frac{1}{\tilde{C}_i(\alpha^*)} \exp\bigl(x \lambda_{i0} (1 - \alpha^*)\bigr) \exp\bigl(x (\lambda_{i0} + \lambda_1) \alpha^*\bigr). \qquad (28)$$
By using the relationship
$$\exp\bigl(\lambda_{i0}(1 - \alpha^*)\bigr) \exp\bigl((\lambda_{i0} + \lambda_1)\alpha^*\bigr) = \exp(\lambda_{i0} + \lambda_1 \alpha^*), \qquad (29)$$
one finally gets from (28)
$$P(X_i = x \,|\, \alpha^*) = \frac{1}{\tilde{C}_i(\alpha^*)} \exp\bigl(x(\lambda_{i0} + \lambda_1 \alpha^*)\bigr) = \frac{\exp\bigl(x(\lambda_{i0} + \lambda_1 \alpha^*)\bigr)}{1 + \exp(\lambda_{i0} + \lambda_1 \alpha^*)}. \qquad (30)$$
Hence, the partial membership extension of the 1PLDCM is effectively an RM with a bounded trait distribution on $[0, 1]$ (set $\theta = \alpha^*$). Therefore, one may view (30) as another variant for testing against the normal distribution assumption for $\theta$ in the RM. Notably, the sum score is a sufficient statistic for $\alpha^*$.
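The derivation (26)-(30) can be verified numerically: the normalized weighted geometric mean of the two class probabilities equals the closed-form logistic expression for every $x$ and $\alpha^*$ (hypothetical helper names):

```python
import math

def irf_partial(x, alpha_star, lam_i0, lam1):
    """Partial membership IRF via the normalized weighted geometric mean (26),
    using the 1PLDCM class probabilities from Eq. (17)."""
    def p_class(x_, alpha):                  # P(X_i = x | alpha) under Eq. (17)
        z = lam_i0 + lam1 * alpha
        p1 = math.exp(z) / (1.0 + math.exp(z))
        return p1 if x_ == 1 else 1.0 - p1
    num = (p_class(x, 0) ** (1.0 - alpha_star)) * (p_class(x, 1) ** alpha_star)
    den = sum((p_class(y, 0) ** (1.0 - alpha_star)) * (p_class(y, 1) ** alpha_star)
              for y in (0, 1))
    return num / den

def irf_rasch_bounded(x, alpha_star, lam_i0, lam1):
    """Closed form (30): a Rasch-type IRF evaluated at lam_i0 + lam1 * alpha*."""
    z = lam_i0 + lam1 * alpha_star
    return math.exp(x * z) / (1.0 + math.exp(z))
```

The agreement is exact because the factors $(1 + \exp(\cdot))^{w}$ cancel between the numerator and the normalization constant, exactly as in the step from (27) to (28).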

4.3. Probabilistic Membership Diagnostic Classification Model

Finally, researchers have weakened the dichotomous nature of $\alpha$ by introducing a bounded variable $\alpha^*$ as a function of an underlying continuous variable $\theta^*$ (see [73,74,75,76]). That is, they directly defined $\alpha^* = \Psi(\theta^*)$ or some linear transformation of $\theta^*$ (which does not affect the following reasoning). This definition was justified by fuzzy logic [73] or probabilistic logic [76]. Hence, the corresponding model is also referred to as a probabilistic membership model. An extension of the 1PLDCM results from replacing the binary variable $\alpha$ with the bounded continuous variable $\alpha^*$ in the IRF. One obtains
$$P(X_i = x \,|\, \alpha^*) = \frac{\exp\bigl(x(\lambda_{i0} + \lambda_1 \alpha^*)\bigr)}{1 + \exp(\lambda_{i0} + \lambda_1 \alpha^*)}. \qquad (31)$$
Obviously, the same expression for the IRF as in the partial membership model (see (30)) is obtained; the partial and the probabilistic membership models are therefore equivalent. The probabilistic membership model is an RM, and the sum score is a sufficient statistic for $\alpha^*$.

5. Numerical Illustration

In this section, we fit the one-parameter models discussed in this article to six publicly available datasets. All rectangular datasets have $N$ subjects (i.e., rows) and $I$ items (i.e., columns) and contain dichotomous values 0 and 1. The datasets data.read ($N = 328$, $I = 12$), data.pisaMath ($N = 565$, $I = 11$), data.pisaRead ($N = 623$, $I = 12$), and data.trees ($N = 387$, $I = 15$; [77]) are included in the R [26] package sirt [28]. The datasets data.numeracy ($N = 876$, $I = 15$) and data.ecpe ($N = 2922$, $I = 28$; [11,78]) can be found in the R packages TAM [79] and CDM [15], respectively. None of the datasets contained missing values.
Eleven different analysis models were specified. The Rasch model was fitted using a normal distribution (“NO”) and a skewed distribution (“SK”; [37]). Furthermore, we specified a one-parameter IRT model using the generalized logistic link function with a normal distribution for $\theta$ (“GL”). Also, we specified LCRMs with 2, 3, 4, and 5 classes (resulting in models “LCRM2”, “LCRM3”, “LCRM4”, and “LCRM5”). Note that LCRM2 coincides with the 1PLDCM. The LCRM2 was also estimated with a generalized logistic link function (“GLLC2”). Furthermore, the mixed, partial, and probabilistic extensions of the 1PLDCM (= LCRM2) were specified (denoted by “MMLC2”, “PMLC2”, and “PRLC2”). The logistic normal distribution was chosen for the bounded variable $\alpha^*$ in the last three models.
The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were used for evaluating relative model fit differences. The RM with a normal distribution (model NO) was taken as a reference for reporting AIC differences. All item response models were estimated using the sirt::xxirt() function in the sirt [28] package. The model estimation always used 100 EM iterations initially and switched afterward to Newton–Raphson optimization. Replication material can be found at https://osf.io/kfcdb/?view_only=2073df46d35f44a5bcb5dd9cc77013e5 (accessed on 15 September 2023).
In Table 1, the AIC and BIC differences of the eleven analysis models for the six datasets are shown. Overall, model comparisons turned out to be similar based on the AIC and BIC values. The RM with the skewed distribution (SK) outperformed the RM with the normal distribution (NO) in three of the six datasets (i.e., for ecpe, numeracy, and pisaRead). This means that the normal distribution assumption for the latent trait θ is violated. For all datasets, the LCRM2 (i.e., the 1PLDCM) was inferior to the RM with a normal distribution (NO). This finding implies that a normal distribution assumption for θ was more reasonable than a two-point distribution. For all datasets except for pisaRead, the latent class model with two classes based on the generalized logistic link function (GLLC2) outperformed the latent class model using the logistic link function (LCRM2). Interestingly, for four datasets, the LCRM with four or five classes fit the data better than model NO in terms of AIC differences. However, in these cases, a Rasch model with a skewed distribution (SK) had a comparable fit with the best-fitting LCRMs.
All one-parameter DCM extensions of LCRM2 improved the fit. However, mixed membership (MMLC2) always performed worse than partial membership (PMLC2) DCMs. As expected from the derivation in Section 4.2, probabilistic (PRLC2) and partial (PMLC2) membership models resulted in a nearly identical fit.
To conclude, the empirical findings demonstrate that the DCM extensions to mixed or partial membership can be seen as alternative IRT models with a continuous latent trait that have a competitive fit. They only assume a bounded distribution for the ability variable $\theta$. Depending on the data, such a more flexible distribution may be desirable to model deviations from a normal trait distribution.

6. Discussion

A recent article introduced the 1PLDCM as a particular case of the logistic DCM [17]. We have shown in this article that this model can be seen as a latent class Rasch model with two located latent classes. The binary classification typically conducted in DCMs can then be tested against latent class Rasch models with a larger number of latent classes, searching for a more appropriate classification of subjects. Notably, the Rasch model possesses the desirable property that the sum score is a sufficient statistic for the latent trait $\theta$, as well as for the dichotomous skill $\alpha$ in the 1PLDCM. More general one-parameter IRT models using the generalized logistic link function were discussed; these lose the sufficiency property of the sum score but still fulfill the invariant item ordering property. We also investigated mixed, partial, and probabilistic membership extensions of the 1PLDCM. It was shown that the partial and probabilistic membership models coincide analytically and are equivalent to the Rasch model with a bounded trait distribution. Hence, they also share the statistical properties of the Rasch model. Importantly, however, the mixed membership DCM extension is not a Rasch model.
We also applied the discussed one-parameter models to six datasets. Unsurprisingly, the 1PLDCM had the worst fit in all datasets. Nevertheless, the fit could be improved by increasing the number of latent classes in the Rasch model to four or five. However, we think DCMs are often chosen not primarily because of their good model fit but because of their more interpretable classification of students, instead of only providing a continuous ability distribution as the model result.
We discussed the relations between the Rasch model and DCMs for models with only one latent variable or one skill, respectively. The findings transfer to multidimensional Rasch models and DCMs in the case of a simple loading structure (i.e., each item loads on only one dimension). In the case of a complex loading structure, the relationships among the different DCM extensions are more intricate. Future research could investigate the relationship between multidimensional compensatory and noncompensatory IRT models [80,81,82] and mixed and partial membership extensions [19] for complex loading structures. It would be interesting to determine whether interpreting $\theta$ as multiple continuous, real-valued latent variables in multidimensional IRT models offers disadvantages compared to mixed or partial membership representations of bounded abilities $\alpha^*$ between 0 and 1 in DCMs. Furthermore, future research could also investigate whether empirical datasets fit the partial better than the mixed membership DCM for multiple skills.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
1PL     one-parameter logistic
1PLDCM  one-parameter logistic diagnostic classification model
2PL     two-parameter logistic
AIC     Akaike information criterion
BIC     Bayesian information criterion
DCM     diagnostic classification model
EM      expectation maximization
GDINA   generalized deterministic inputs, noisy “and” gate
IRF     item response function
IRT     item response theory
LCRM    latent class Rasch model
LDCM    logistic diagnostic classification model
MML     marginal maximum likelihood
RM      Rasch model

References

  1. DiBello, L.V.; Roussos, L.A.; Stout, W. A review of cognitively diagnostic assessment and a summary of psychometric models. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 979–1030. [Google Scholar] [CrossRef]
  2. Maris, E. Estimating multiple classification latent class models. Psychometrika 1999, 64, 187–212. [Google Scholar] [CrossRef]
  3. Ravand, H.; Baghaei, P. Diagnostic classification models: Recent developments, practical issues, and prospects. Int. J. Test. 2020, 20, 24–56. [Google Scholar] [CrossRef]
  4. Rupp, A.A.; Templin, J.L. Unique characteristics of diagnostic classification models: A comprehensive review of the current state-of-the-art. Meas. Interdiscip. Res. Persp. 2008, 6, 219–262. [Google Scholar] [CrossRef]
  5. von Davier, M.; DiBello, L.; Yamamoto, K.Y. Reporting Test Outcomes with Models for Cognitive Diagnosis; Research Report No. RR-06-28; Educational Testing Service: Princeton, NJ, USA, 2006. [Google Scholar] [CrossRef]
  6. Zhang, S.; Liu, J.; Ying, Z. Statistical applications to cognitive diagnostic testing. Annu. Rev. Stat. Appl. 2023, 10, 651–675. [Google Scholar] [CrossRef]
  7. Bock, R.D.; Moustaki, I. Item response theory in a general framework. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 469–513. [Google Scholar] [CrossRef]
  8. Cai, L.; Choi, K.; Hansen, M.; Harrell, L. Item response theory. Annu. Rev. Stat. Appl. 2016, 3, 297–321. [Google Scholar] [CrossRef]
  9. Chen, Y.; Li, X.; Liu, J.; Ying, Z. Item response theory—A statistical framework for educational and psychological measurement. arXiv 2021, arXiv:2108.08604. [Google Scholar]
  10. Chang, H.H.; Wang, C.; Zhang, S. Statistical applications in educational measurement. Annu. Rev. Stat. Appl. 2021, 8, 439–461. [Google Scholar] [CrossRef]
  11. Templin, J.; Hoffman, L. Obtaining diagnostic classification model estimates using Mplus. Educ. Meas. 2013, 32, 37–50. [Google Scholar] [CrossRef]
  12. de la Torre, J.; van der Ark, L.A.; Rossi, G. Analysis of clinical data from a cognitive diagnosis modeling framework. Meas. Eval. Couns. Dev. 2018, 51, 281–296. [Google Scholar] [CrossRef]
  13. Ma, W.; de la Torre, J. GDINA: An R package for cognitive diagnosis modeling. J. Stat. Softw. 2020, 93, 1–26. [Google Scholar] [CrossRef]
  14. Ma, W. Cognitive diagnosis modeling using the GDINA R package. In Handbook of Diagnostic Classification Models; von Davier, M., Lee, Y.S., Eds.; Springer: Cham, Switzerland, 2019; pp. 593–601. [Google Scholar] [CrossRef]
  15. George, A.C.; Robitzsch, A.; Kiefer, T.; Groß, J.; Ünlü, A. The R package CDM for cognitive diagnosis models. J. Stat. Softw. 2016, 74, 1–24. [Google Scholar] [CrossRef]
  16. Robitzsch, A.; George, A.C. The R package CDM for diagnostic modeling. In Handbook of Diagnostic Classification Models; von Davier, M., Lee, Y.S., Eds.; Springer: Cham, Switzerland, 2019; pp. 549–572. [Google Scholar] [CrossRef]
  17. Madison, M.J.; Wind, S.A.; Maas, L.; Yamaguchi, K.; Haab, S. A one-parameter diagnostic classification model with familiar measurement properties. arXiv 2023, arXiv:2307.16744. [Google Scholar]
  18. Rasch, G. Probabilistic Models for Some Intelligence and Attainment Tests; Danish Institute for Educational Research: Copenhagen, Denmark, 1960. [Google Scholar]
  19. Shang, Z.; Erosheva, E.A.; Xu, G. Partial-mastery cognitive diagnosis models. Ann. Appl. Stat. 2021, 15, 1529–1555. [Google Scholar] [CrossRef]
  20. van der Linden, W.J. Unidimensional logistic response models. In Handbook of Item Response Theory, Volume 1: Models; van der Linden, W.J., Ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 11–30. [Google Scholar]
  21. Yen, W.M.; Fitzpatrick, A.R. Item response theory. In Educational Measurement; Brennan, R.L., Ed.; Praeger Publishers: Westport, CT, USA, 2006; pp. 111–154. [Google Scholar]
  22. San Martin, E.; Rolin, J. Identification of parametric Rasch-type models. J. Stat. Plan. Inference 2013, 143, 116–130. [Google Scholar] [CrossRef]
  23. Warm, T.A. Weighted likelihood estimation of ability in item response theory. Psychometrika 1989, 54, 427–450. [Google Scholar] [CrossRef]
  24. Bock, R.D.; Aitkin, M. Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika 1981, 46, 443–459. [Google Scholar] [CrossRef]
  25. Aitkin, M. Expectation maximization algorithm and extensions. In Handbook of Item Response Theory, Vol. 2: Statistical Tools; van der Linden, W.J., Ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 217–236. [Google Scholar] [CrossRef]
  26. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria. 2023. Available online: https://www.R-project.org/ (accessed on 15 March 2023).
  27. Chalmers, R.P. mirt: A multidimensional item response theory package for the R environment. J. Stat. Softw. 2012, 48, 1–29. [Google Scholar] [CrossRef]
  28. Robitzsch, A. Sirt: Supplementary Item Response Theory Models. R Package Version 4.0-17. 2023. Available online: https://github.com/alexanderrobitzsch/sirt (accessed on 29 August 2023).
  29. Fischer, G.H. Rasch models. In Handbook of Statistics, Vol. 26: Psychometrics; Rao, C.R., Sinharay, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; pp. 515–585. [Google Scholar] [CrossRef]
  30. Andrich, D.; Marais, I. A Course in Rasch Measurement Theory; Springer: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  31. Boone, W.J.; Staver, J.R.; Yale, M.S. Rasch Analysis in the Human Sciences; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  32. Bond, T.; Yan, Z.; Heene, M. Applying the Rasch Model; Routledge: New York, NY, USA, 2020. [Google Scholar]
  33. Kubinger, K.D. Psychological test calibration using the Rasch model—Some critical suggestions on traditional approaches. Int. J. Test. 2005, 5, 377–394. [Google Scholar] [CrossRef]
  34. Linacre, J.M. Understanding Rasch measurement: Estimation methods for Rasch measures. J. Outcome Meas. 1999, 3, 382–405. Available online: https://bit.ly/2UV6Eht (accessed on 28 August 2023).
  35. Wind, S.A.; Engelhard, G. How invariant and accurate are domain ratings in writing assessment? Assess. Writ. 2013, 18, 278–299. [Google Scholar] [CrossRef]
  36. von Davier, M. A general diagnostic model applied to language testing data. Brit. J. Math. Stat. Psychol. 2008, 61, 287–307. [Google Scholar] [CrossRef] [PubMed]
  37. Xu, X.; von Davier, M. Fitting the Structured General Diagnostic Model to NAEP Data; Research Report No. RR-08-28; Educational Testing Service: Princeton, NJ, USA, 2008. [Google Scholar] [CrossRef]
  38. De Leeuw, J.; Verhelst, N. Maximum likelihood estimation in generalized Rasch models. J. Educ. Behav. Stat. 1986, 11, 183–196. [Google Scholar] [CrossRef]
  39. Formann, A.K. Constrained latent class models: Theory and applications. Brit. J. Math. Stat. Psychol. 1985, 38, 87–111. [Google Scholar] [CrossRef]
  40. Haberman, S.J. Latent-Class Item Response Models; Research Report No. RR-05-28; Educational Testing Service: Princeton, NJ, USA, 2005. [Google Scholar] [CrossRef]
  41. Lindsay, B.; Clogg, C.C.; Grego, J. Semiparametric estimation in the Rasch model and related exponential response models, including a simple latent class model for item analysis. J. Am. Stat. Assoc. 1991, 86, 96–107. [Google Scholar] [CrossRef]
  42. Bacci, S.; Bartolucci, F. A multidimensional latent class Rasch model for the assessment of the health-related quality of life. In Rasch Models in Health; Christensen, K.B., Kreiner, S., Mesbah, M., Eds.; Wiley: Hoboken, NJ, USA, 2013; pp. 197–218. [Google Scholar] [CrossRef]
  43. Robitzsch, A. A comprehensive simulation study of estimation methods for the Rasch model. Stats 2021, 4, 814–836. [Google Scholar] [CrossRef]
  44. Goldstein, H. Consequences of using the Rasch model for educational assessment. Br. Educ. Res. J. 1979, 5, 211–220. [Google Scholar] [CrossRef]
  45. Scheiblechner, H. Additive conjoint isotonic probabilistic models (ADISOP). Psychometrika 1999, 64, 295–316. [Google Scholar] [CrossRef]
  46. Agresti, A. Categorical Data Analysis; John Wiley & Sons: New York, NY, USA, 2012; Volume 792. [Google Scholar] [CrossRef]
  47. Shim, H.; Bonifay, W.; Wiedermann, W. Parsimonious asymmetric item response theory modeling with the complementary log-log link. Behav. Res. Methods 2023, 55, 200–219. [Google Scholar] [CrossRef]
  48. Shim, H.; Bonifay, W.; Wiedermann, W. Parsimonious item response theory modeling with the negative log-log link: The role of inflection point shift. Behav. Res. Methods 2023, epub ahead of print. [Google Scholar] [CrossRef]
  49. Stukel, T.A. Generalized logistic models. J. Am. Stat. Assoc. 1988, 83, 426–431. [Google Scholar] [CrossRef]
  50. Zhang, J.; Zhang, Y.Y.; Tao, J.; Chen, M.H. Bayesian item response theory models with flexible generalized logit links. Appl. Psychol. Meas. 2022, 46, 382–405. [Google Scholar] [CrossRef] [PubMed]
  51. Robitzsch, A. On the choice of the item response model for scaling PISA data: Model selection based on information criteria and quantifying model uncertainty. Entropy 2022, 24, 760. [Google Scholar] [CrossRef] [PubMed]
  52. Robitzsch, A. Regularized generalized logistic item response model. Information 2023, 14, 306. [Google Scholar] [CrossRef]
  53. Wang, X.; Lu, J.; Zhang, J. Exploration and analysis of a generalized one-parameter item response model with flexible link functions. Front. Psychol. 2023, 14, 1248454. [Google Scholar] [CrossRef] [PubMed]
  54. Dayton, C.M.; Macready, G.B. A probabilistic model for validation of behavioral hierarchies. Psychometrika 1976, 41, 189–204. [Google Scholar] [CrossRef]
  55. Haertel, E.H. Using restricted latent class models to map the skill structure of achievement items. J. Educ. Meas. 1989, 26, 301–321. [Google Scholar] [CrossRef]
  56. de la Torre, J. The generalized DINA model framework. Psychometrika 2011, 76, 179–199. [Google Scholar] [CrossRef]
  57. Henson, R.A.; Templin, J.L.; Willse, J.T. Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika 2009, 74, 191–210. [Google Scholar] [CrossRef]
  58. Birnbaum, A. Some latent trait models and their use in inferring an examinee’s ability. In Statistical Theories of Mental Test Scores; Lord, F.M., Novick, M.R., Eds.; MIT Press: Reading, MA, USA, 1968; pp. 397–479. [Google Scholar]
  59. von Davier, M. Mixture Distribution Diagnostic Models; Research Report No. RR-07-32; Educational Testing Service: Princeton, NJ, USA, 2007. [Google Scholar] [CrossRef]
  60. Formann, A.K. Linear logistic latent class analysis. Biom. J. 1982, 24, 171–190. [Google Scholar] [CrossRef]
  61. Formann, A.K. Linear logistic latent class analysis for polytomous data. J. Am. Stat. Assoc. 1992, 87, 476–486. [Google Scholar] [CrossRef]
  62. Formann, A.K.; Kohlmann, T. Structural latent class models. Sociol. Methods Res. 1998, 26, 530–565. [Google Scholar] [CrossRef]
  63. de la Torre, J.; Lee, Y.S. A note on the invariance of the DINA model parameters. J. Educ. Meas. 2010, 47, 115–127. [Google Scholar] [CrossRef]
  64. Huang, Q.; Bolt, D.M. Relative robustness of CDMs and (M)IRT in measuring growth in latent skills. Educ. Psychol. Meas. 2023, 83, 808–830. [Google Scholar] [CrossRef] [PubMed]
  65. Ma, W.; Chen, J.; Jiang, Z. Attribute continuity in cognitive diagnosis models: Impact on parameter estimation and its detection. Behaviormetrika 2023, 50, 217–240. [Google Scholar] [CrossRef]
  66. Erosheva, E.A. Comparing latent structures of the grade of membership, Rasch, and latent class models. Psychometrika 2005, 70, 619–628. [Google Scholar] [CrossRef]
  67. Woodbury, M.A.; Clive, J.; Garson, A., Jr. Mathematical typology: A grade of membership technique for obtaining disease definition. Comput. Biomed. Res. 1978, 11, 277–298. [Google Scholar] [CrossRef]
  68. Erosheva, E.A.; Fienberg, S.E.; Junker, B.W. Alternative statistical models and representations for large sparse multi-dimensional contingency tables. Ann. Fac. Sci. Toulouse Math. 2002, 11, 485–505. [Google Scholar] [CrossRef]
  69. Erosheva, E.A.; Fienberg, S.E.; Joutard, C. Describing disability through individual-level mixture models for multivariate binary data. Ann. Appl. Stat. 2007, 1, 346–384. [Google Scholar] [CrossRef]
  70. Paisley, J.; Wang, C.; Blei, D.M. The discrete infinite logistic normal distribution. Bayesian Anal. 2012, 7, 997–1034. [Google Scholar] [CrossRef]
  71. Gruhl, J.; Erosheva, E.A.; Ghahramani, Z.; Mohamed, S.; Heller, K. A tale of two (types of) memberships: Comparing mixed and partial membership with a continuous data example. In Handbook of Mixed Membership Models and Their Applications; Airoldi, E.M., Blei, D., Erosheva, E.A., Fienberg, S.E., Eds.; Chapman & Hall: Boca Raton, FL, USA, 2014; pp. 15–38. [Google Scholar]
  72. Ghahramani, Z.; Mohamed, S.; Heller, K. A simple and general exponential family framework for partial membership and factor analysis. In Handbook of Mixed Membership Models and Their Applications; Airoldi, E.M., Blei, D., Erosheva, E.A., Fienberg, S.E., Eds.; Chapman & Hall: Boca Raton, FL, USA, 2014; pp. 101–122. [Google Scholar]
  73. Liu, Q.; Wu, R.; Chen, E.; Xu, G.; Su, Y.; Chen, Z.; Hu, G. Fuzzy cognitive diagnosis for modelling examinee performance. ACM Trans. Intell. Syst. Technol. 2018, 9, 1–26. [Google Scholar] [CrossRef]
  74. Shu, T.; Luo, G.; Luo, Z.; Yu, X.; Guo, X.; Li, Y. An explicit form with continuous attribute profile of the partial mastery DINA model. J. Educ. Behav. Stat. 2023, 48, 573–602. [Google Scholar] [CrossRef]
  75. Zhan, P.; Wang, W.C.; Jiao, H.; Bian, Y. Probabilistic-input, noisy conjunctive models for cognitive diagnosis. Front. Psychol. 2018, 9, 997. [Google Scholar] [CrossRef] [PubMed]
  76. Zhan, P. Refined learning tracking with a longitudinal probabilistic diagnostic model. Educ. Meas. 2021, 40, 44–58. [Google Scholar] [CrossRef]
  77. Stoyan, D.; Pommerening, A.; Wünsche, A. Rater classification by means of set-theoretic methods applied to forestry data. J. Environ. Stat. 2018, 8, 1–17. Available online: http://www.jenvstat.org/v08/i02 (accessed on 28 August 2022).
  78. Templin, J.; Bradshaw, L. Hierarchical diagnostic classification models: A family of models for estimating and testing attribute hierarchies. Psychometrika 2014, 79, 317–339. [Google Scholar] [CrossRef] [PubMed]
  79. Robitzsch, A.; Kiefer, T.; Wu, M. TAM: Test Analysis Modules. 2022. R Package Version 4.1-4. Available online: https://CRAN.R-project.org/package=TAM (accessed on 28 August 2022).
  80. Bolt, D.M.; Lall, V.F. Estimation of compensatory and noncompensatory multidimensional item response models using Markov chain Monte Carlo. Appl. Psychol. Meas. 2003, 27, 395–414. [Google Scholar] [CrossRef]
  81. Chalmers, R.P. Partially and fully noncompensatory response models for dichotomous and polytomous items. Appl. Psychol. Meas. 2020, 44, 415–430. [Google Scholar] [CrossRef]
  82. Reckase, M.D. Multidimensional Item Response Theory Models; Springer: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
Table 1. AIC differences (ΔAIC) and BIC differences (ΔBIC) for eleven analysis models for six different datasets.

Model    read        ecpe        numeracy    pisaMath    pisaRead    trees
         ΔAIC  ΔBIC  ΔAIC  ΔBIC  ΔAIC  ΔBIC  ΔAIC  ΔBIC  ΔAIC  ΔBIC  ΔAIC  ΔBIC
NO †        0     0     0     0     0     0     0     0     0     0     0     0
SK         −2    −6    55    49    31    26    −1    −6    14    10    −2    −6
LCRM2     −18   −21  −777  −782  −247  −251   −63   −67   −89   −93   −38   −42
GLLC2      −8   −20  −739  −757  −221  −235   −32   −45   −92  −105   −21   −33
LCRM3      −2   −13   −82  −100   −43   −58     7    −6    −9   −22    −4   −16
LCRM4      −3   −22    47    17    36    12     3   −18     6   −16    −5   −25
LCRM5      −7   −34    48     6    35     1     3   −27     6   −25    −9   −37
PRLC2       2    −6    42    30    35    25     6    −3    13     4    −2   −10
PMLC2       2    −6    42    30    34    25     7    −1    13     4    −2   −10
MMLC2     −14   −22  −226  −238    −6   −16    −8   −17   −62   −71     0    −8
GL          8     0     2   −10     9     0    16     7     0    −8     8     0
Note. NO = Rasch model with normal distribution; SK = Rasch model with skewed distribution; LCRMc = latent class Rasch model with c = 2, 3, 4, 5 located latent classes; GLLC2 = 1PL in two latent classes with generalized logistic link function; PRLC2 = probabilistic membership in two latent classes; PMLC2 = partial membership in two latent classes; MMLC2 = mixed membership in two latent classes; GL = Rasch model with generalized logistic distribution. The model LCRM2 corresponds to the one-parameter logistic diagnostic classification model (1PLDCM). † The model NO was used as the reference model to compute AIC and BIC differences. Positive values indicate a better-fitting model.

