R/RcppExports.R
sim_alphas.Rd
Based on the learning model parameters, create a cube of attribute patterns for all subjects across time points. Currently available learning models are the Higher-order hidden Markov DCM ('HO_sep'), the Higher-order hidden Markov DCM with learning ability as a random effect ('HO_joint'), the simple independent-attribute learning model ('indept'), and the first-order hidden Markov model ('FOHM').
sim_alphas(
  model,
  lambdas = NULL,
  thetas = NULL,
  Q_matrix = NULL,
  Design_array = NULL,
  taus = NULL,
  Omega = NULL,
  N = NA_integer_,
  L = NA_integer_,
  R = NULL,
  alpha0 = NULL
)
model: The learning model under which the attribute trajectories are generated. Available options are: 'HO_joint', 'HO_sep', 'indept', 'FOHM'.
lambdas: A vector of transition model coefficients. Under the 'HO_sep' model specification, lambdas should be a length-4 vector: the first entry is the intercept of the logistic transition model, the second is the slope of general learning ability, the third is the slope for the number of other mastered skills, and the fourth is the slope for the amount of practice. Under the 'HO_joint' model specification, lambdas should be a length-3 vector: the first entry is the intercept of the logistic transition model, the second is the slope for the number of other mastered skills, and the third is the slope for the amount of practice. (A sketch of how these coefficients enter the transition probability is given just before the examples.)
thetas: A length-N vector of learning abilities of each subject.
Q_matrix: A J-by-K Q-matrix.
Design_array: An N-by-J-by-L array indicating the items administered to examinee n at time point l.
taus: A length-K vector of transition probabilities from 0 to 1 on each skill.
Omega: A 2^K-by-2^K matrix of transition probabilities from row pattern to column pattern.
N: An integer giving the number of examinees.
L: An integer giving the number of time points.
R: A K-by-K dichotomous reachability matrix indicating the attribute hierarchies. The (k, k')-th entry of R is 1 if skill k' is a prerequisite of skill k. (A sketch of a non-empty reachability matrix follows the 'indept' example below.)
alpha0: Optional. An N-by-K matrix of subjects' initial attribute patterns.
Value: An N-by-K-by-L array of attribute patterns of subjects at each time point.
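The sketch below (an illustration, not part of the package API) shows how the lambdas coefficients enter the logistic transition probability for a single examinee and skill under the parameterization described above; theta_i, n_mastered, and practice are hypothetical placeholders.

# 'HO_sep': lambdas = (intercept, slope of theta, slope of # other mastered skills, slope of practice)
theta_i    <- 0.5   # general learning ability of examinee i
n_mastered <- 2     # number of other skills already mastered
practice   <- 3     # amount of practice accumulated on this skill
lambdas_sep <- c(-1, 1.8, .277, .055)
p_transition <- plogis(lambdas_sep[1] + lambdas_sep[2]*theta_i +
                       lambdas_sep[3]*n_mastered + lambdas_sep[4]*practice)
# Under 'HO_joint' the theta slope is dropped (length-3 lambdas) and the learning
# ability instead enters the transition model as a subject-level random effect.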
# \donttest{
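# Note (illustrative, not part of the original examples): the examples below assume
# a J-by-K `Q_matrix` and an N-by-J-by-L `Design_array` are already available, e.g.
# from the package's example data. A minimal toy construction could look like this
# (the exact coding of Design_array entries is an assumption; check the package data):
Q_toy <- matrix(c(1,0, 0,1, 1,1, 1,0, 0,1, 1,1), nrow = 6, ncol = 2, byrow = TRUE)  # J = 6, K = 2
D_toy <- array(1, dim = c(50, 6, 3))  # N = 50 examinees, all items administered at L = 3 time points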
## HO_joint ##
N = nrow(Design_array)
J = nrow(Q_matrix)
K = ncol(Q_matrix)
L = dim(Design_array)[3]
# initial attribute patterns (constructed for illustration; these could be
# supplied through the optional alpha0 argument)
class_0 <- sample(1:2^K, N, replace = TRUE)
Alphas_0 <- matrix(0, N, K)
for(i in 1:N){
  Alphas_0[i,] <- inv_bijectionvector(K, (class_0[i]-1))
}
thetas_true = rnorm(N, 0, 1.8)
lambdas_true <- c(-2, .4, .055)
Alphas <- sim_alphas(model="HO_joint",
                     lambdas=lambdas_true,
                     thetas=thetas_true,
                     Q_matrix=Q_matrix,
                     Design_array=Design_array)
## HO_sep ##
N = dim(Design_array)[1]
J = nrow(Q_matrix)
K = ncol(Q_matrix)
L = dim(Design_array)[3]
class_0 <- sample(1:2^K, N, replace = TRUE)
Alphas_0 <- matrix(0, N, K)
for(i in 1:N){
  Alphas_0[i,] <- inv_bijectionvector(K, (class_0[i]-1))
}
thetas_true = rnorm(N)
lambdas_true = c(-1, 1.8, .277, .055)
Alphas <- sim_alphas(model="HO_sep",
                     lambdas=lambdas_true,
                     thetas=thetas_true,
                     Q_matrix=Q_matrix,
                     Design_array=Design_array)
## indept ##
N = dim(Design_array)[1]
K = dim(Q_matrix)[2]
L = dim(Design_array)[3]
tau <- numeric(K)
for(k in 1:K){
tau[k] <- runif(1,.2,.6)
}
R = matrix(0,K,K)   # empty reachability matrix: no attribute hierarchy
p_mastery <- c(.5,.5,.4,.4)
Alphas_0 <- matrix(0,N,K)
for(i in 1:N){
  for(k in 1:K){
    prereqs <- which(R[k,]==1)
    if(length(prereqs)==0){
      Alphas_0[i,k] <- rbinom(1,1,p_mastery[k])
    }
    if(length(prereqs)>0){
      Alphas_0[i,k] <- prod(Alphas_0[i,prereqs])*rbinom(1,1,p_mastery[k])
    }
  }
}
Alphas <- sim_alphas(model="indept", taus=tau, N=N, L=L, R=R)
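# Illustrative sketch (not from the original example): a non-empty reachability
# matrix for model="indept". Here skill 1 is a prerequisite of skill 2, so
# R_hier[2,1] = 1; the remaining skills are unconstrained.
R_hier <- matrix(0, K, K)
R_hier[2, 1] <- 1
Alphas_hier <- sim_alphas(model="indept", taus=tau, N=N, L=L, R=R_hier)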
## FOHM ##
N = dim(Design_array)[1]
K = ncol(Q_matrix)
L = dim(Design_array)[3]
TP <- TPmat(K)
Omega_true <- rOmega(TP)
class_0 <- sample(1:2^K, N, replace = TRUE)
Alphas_0 <- matrix(0,N,K)
for(i in 1:N){
  Alphas_0[i,] <- inv_bijectionvector(K,(class_0[i]-1))
}
Alphas <- sim_alphas(model="FOHM", Omega = Omega_true, N=N, L=L)
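# Quick sanity checks (illustrative additions, not from the original examples)
# on any of the simulated trajectory arrays above:
dim(Alphas)              # N-by-K-by-L, as described in the Value section
apply(Alphas, 3, mean)   # overall proportion of mastered attributes at each time point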
# }