What Constitutes Significant Differences in Evaluating Measurement Invariance?
  | Cheung, Gordon W  | Chinese U. of Hong Kong  | gordonc@cuhk.edu.hk  | (852)-2609-7778  |
More researchers are using Multi-Group Confirmatory Factor Analysis (MGCFA) to test for measurement invariance because they are concerned about whether the scales used to measure constructs operate equally well across groups. In addition to testing measurement invariance, researchers are using MGCFA to compare construct means, variances, and covariances across groups. This paper attempts to advance the tests of measurement invariance and construct equivalence using MGCFA by proposing a framework that sets out the sequence of tests researchers should follow.
Problems with current methods of assessing invariance hypotheses are discussed. A Monte Carlo simulation was performed to assess the effects of sampling error and model characteristics on changes in CFI, TLI, and RMSEA when invariance constraints are added to a model. Results show that models with a greater number of items per factor and a greater number of factors are associated with lower values of CFI and TLI. However, none of the model specifications (number of items per factor, number of factors, factor loadings, factor variance, factor correlations, and sample size) has a significant effect on the changes in CFI, TLI, and RMSEA when testing invariance hypotheses.
Finally, the change in CFI was found to be a robust test of invariance hypotheses. It is proposed that when the decrease in CFI is .01 or less (a change in CFI no smaller than -.01), the invariance hypothesis should be retained; when the decrease in CFI exceeds .02 (a change smaller than -.02), the model with invariance constraints has probably produced a significantly poorer fit than the unconstrained model.
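The decision rule just described lends itself to a direct statement in code. The sketch below is a minimal Python illustration of the proposed change-in-CFI criterion, not the authors' software; the function name and the example CFI values are invented for illustration.

```python
# Minimal sketch (assumed, not the authors' code) of the proposed delta-CFI rule.
# delta_cfi is CFI(constrained) - CFI(unconstrained); it is usually non-positive,
# since adding invariance constraints rarely improves fit.

def evaluate_invariance(cfi_unconstrained: float, cfi_constrained: float) -> str:
    """Classify an invariance test by the change in CFI."""
    delta_cfi = cfi_constrained - cfi_unconstrained
    if delta_cfi >= -0.01:    # CFI drops by .01 or less: retain invariance
        return "retain invariance hypothesis"
    elif delta_cfi < -0.02:   # CFI drops by more than .02: significantly poorer fit
        return "reject invariance hypothesis"
    else:                     # drop between .01 and .02: inconclusive region
        return "inconclusive"

# Example: CFI falls from .952 to .945 when loadings are constrained equal.
print(evaluate_invariance(0.952, 0.945))  # -> "retain invariance hypothesis"
```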
| Keywords: Measurement Invariance; Fit indices; Structural equation modeling |
Using Confirmatory Factor Analysis of Correlated Uniquenesses to Estimate Method Variance in Multitrait-Multimethod Matrices
  | Scullen, Steven E.  | North Carolina State U.  | steve_scullen@ncsu.edu  | 919-515-9387  |
Considerable research attention has been given to measuring method variance, much of it aimed in some way at assessing its effects on construct validity (Doty & Glick, 1998). Traditional confirmatory factor analysis (CFA) of multitrait-multimethod (MTMM) matrices has been a popular approach to such research, but CFA models very often exhibit serious estimation problems. The correlated uniquenesses (CU) model generally eliminates CFA's estimation problems, but until Conway (1998) no method had been developed for using the CU model to estimate method variance. Conway argued that the average of the correlated uniquenesses can be used to estimate the average method variance. This paper extends Conway's method in two ways. First, it generalizes the logic behind Conway's method and shows that the method produces downwardly biased estimates of the true values. Factors affecting the accuracy of estimates produced by Conway's method are discussed, and the method's applicability to a broader range of MTMM matrices, especially those with more than three traits, is examined. Second, this paper presents a new method, based on confirmatory factor analysis of the covariance matrix of uniquenesses, for examining method variance. The new method offers two important advantages over Conway's method. One is that it provides more precise and unbiased estimates of the average method variance associated with each measurement method. The other is that it is the first method that allows the researcher to exploit the superior estimation properties of the CU model when estimating the amount of method variance in individual measures.
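Conway's estimator, and the downward bias noted above, can be illustrated numerically. Under a CU model, two measures sharing a method with standardized method loadings m_i and m_j have uniquenesses that covary by m_i * m_j, while the method variance in measure i is m_i squared. The sketch below uses hypothetical loadings that do not come from the paper; it shows that averaging the pairwise products understates the average squared loading whenever the loadings are heterogeneous.

```python
# Numeric illustration (hypothetical loadings, not data from the paper) of why
# averaging correlated uniquenesses understates average method variance.
# Under a CU model, cov(u_i, u_j) = m_i * m_j for measures sharing a method,
# while the method variance contained in measure i is m_i ** 2.
import numpy as np
from itertools import combinations

method_loadings = np.array([0.2, 0.4, 0.6, 0.8])  # assumed, deliberately unequal

true_avg_method_variance = np.mean(method_loadings ** 2)
avg_correlated_uniqueness = np.mean(
    [mi * mj for mi, mj in combinations(method_loadings, 2)]
)

print(f"average method variance (mean of m_i^2):   {true_avg_method_variance:.3f}")
print(f"Conway-style average CU (mean of m_i*m_j): {avg_correlated_uniqueness:.3f}")
# By the Cauchy-Schwarz inequality the second quantity can never exceed the
# first, with equality only when all method loadings are equal.
```

With these values the averaged products come to about .233 against a true average method variance of .300, which is the direction of bias the paper describes.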
| Keywords: method variance; multitrait-multimethod matrix; correlated uniqueness |
Effects of Model Parsimony and Sampling Error on the Fit of Structural Equation Models
  | Cheung, Gordon W  | Chinese U. of Hong Kong  | gordonc@cuhk.edu.hk  | (852)-2609-7778  |
  | Rensvold, Roger B.  | City U. of Hong Kong  | mgrr@cityu.edu.hk  | (852)-2788-7857  |
The fit between a structural equation model (SEM) and a data set is usually operationalized as the value of a fit index. The difference between that value and the value indicating perfect fit can be thought of as fit error. Fit error has three sources: misspecification, error arising from theoretical parsimony in the description of the model (parsimony error), and sampling error. Misspecification, which represents a disparity between "real-world" relationships and the causal paths in the model, is the most important source of error for researchers. It cannot be accurately assessed, however, unless parsimony error and sampling error are taken into account.
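Schematically, in notation introduced here rather than taken from the paper, the decomposition is

$$\varepsilon_{\text{fit}} \;=\; \varepsilon_{\text{misspecification}} \;+\; \varepsilon_{\text{parsimony}} \;+\; \varepsilon_{\text{sampling}},$$

where only the first term reflects substantive disagreement between the model and the population, and the task is to judge its size after accounting for the other two.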
Parsimony error occurs when non-significant secondary factor loadings and correlations among error terms are excluded from the model. Although theoretically uninteresting and small, these terms are usually non-zero, and excluding them contributes to fit error. A simulation was conducted to examine the effects of parsimony error on otherwise correctly specified models, and to establish criterion values for fit indices.
Sampling error is ubiquitous, in that any fit index obtained from a sample differs from the one that would be obtained from the entire population. A bootstrap simulation generated sampling distributions of fit indices under the assumption that the previously established criterion values represented the population fit. The results are used to address two related questions: What is "good" fit? And given good fit in the population, what is the probability of obtaining a particular value of a fit index from a random sample of size N?
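A parametric analogue of this simulation is easy to sketch. The Python code below is an assumed setup, not the authors' procedure: it fixes a population covariance matrix implied by a one-factor model with invented loadings, repeatedly draws samples of size N, and tabulates an RMSEA-type statistic computed from the maximum-likelihood discrepancy. Because the population fits perfectly, the resulting distribution shows how often sampling error alone pushes the index past a conventional cutoff.

```python
# Parametric sketch (assumed setup, not the authors' code) of the sampling
# distribution of a fit statistic when the population fits perfectly.
# sigma is the covariance implied by a one-factor model with known loadings;
# each replication draws N observations and evaluates the ML discrepancy of
# the sample covariance S against the fixed sigma.
import numpy as np

rng = np.random.default_rng(0)

loadings = np.array([0.7, 0.7, 0.7, 0.7])   # assumed population loadings
p = loadings.size
sigma = np.outer(loadings, loadings)        # common-factor part
np.fill_diagonal(sigma, 1.0)                # standardized: unit variances

N, n_reps = 200, 2000
df = p * (p + 1) // 2                       # sigma fully fixed: no free parameters

def ml_discrepancy(S, Sigma):
    """Maximum-likelihood discrepancy F = tr(S Sigma^-1) - ln|S Sigma^-1| - p."""
    A = S @ np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(A)
    return np.trace(A) - logdet - S.shape[0]

rmseas = []
for _ in range(n_reps):
    X = rng.multivariate_normal(np.zeros(p), sigma, size=N)
    S = np.cov(X, rowvar=False)
    T = (N - 1) * ml_discrepancy(S, sigma)  # likelihood-ratio test statistic
    rmseas.append(np.sqrt(max(T / df - 1, 0) / (N - 1)))

# With perfect population fit, how often does sampling error alone push
# RMSEA past a conventional cutoff such as .05?
print("P(RMSEA > .05 | perfect fit, N=200) ~", np.mean(np.array(rmseas) > 0.05))
```

The same loop, run across a grid of sample sizes, would trace out how the probability of a misleadingly "poor" index value shrinks as N grows, which is the kind of question the abstract poses.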
| Keywords: Goodness-of-fit indices; Bootstrap procedure; Structural equation models |