Session Summary

Session Number: 806
Session ID: S176
Session Title: Using Monte Carlo Simulations to Answer Methodological Questions
Short Title: Simulations to Answer Method Q
Session Type: Division Paper
Hotel: Swiss
Floor: LL3
Room: Alpine I
Time: Tuesday, August 10, 1999, 3:40 PM - 5:00 PM

Sponsors

RM  (Karen Golden-Biddle) karen.golden-biddle@ualberta.ca (403) 492-8901 

General People

Chair Aguinis, Herman  U. of Colorado, Denver Herman.Aguinis@cudenver.edu (303) 556-5865 
Discussant Goodman, Jodi S. Purdue U. jgoodman@mgmt.purdue.edu 765-494-4485 
Discussant Bliese, Paul D. Walter Reed Army Institute of Research bliese@wrair-emh1.army.mil (301) 295-7856 
Discussant Gully, Stanley M. Rutgers U. gully@rci.rutgers.edu (732) 445-5830 

Submissions

Missing Data in Multiple Item Scales: A Monte Carlo Analysis of Missing Data Techniques 
 Roth, Philip L. Clemson U. rothp@clemson.edu 864-656-1039 
 Switzer III, Fred S. Clemson U. switzef@clemson.edu 864-656-4980 
 Switzer, Deborah  Clemson U. switzed@clemson.edu 864-656-5098 
 Researchers in many fields use multiple-item scales to measure important variables such as attitudes and personality traits, but often find that some respondents failed to complete certain items. Past missing-data research has focused on entirely missing instruments and is of limited help here: few variables are available to help impute the missing scores, and those variables are often not highly related to each other. Multiple-item scales offer a unique opportunity to impute missing values from correlated items designed to measure the same construct. A Monte Carlo analysis suggests that imputation techniques, such as regression imputation and substituting a person's mean response to the other items on a scale, lead to low levels of bias and dispersion around true scores. Further, imputation techniques often outperformed listwise deletion, which is the most commonly used approach in literatures such as applied psychology and human resource management. (An illustrative simulation sketch follows below.)
 Keywords: Missing Data
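
 A minimal Monte Carlo sketch can make the comparison concrete. The following Python/NumPy code is not the authors' program; the scale length, inter-item correlation, and missingness rate are illustrative assumptions. It contrasts person-mean imputation with listwise deletion when estimating a scale mean under data that are missing completely at random.

import numpy as np

rng = np.random.default_rng(0)
n, k, rho = 200, 5, 0.5  # respondents, items, inter-item correlation (assumed)
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)

est_imputed, est_listwise = [], []
for rep in range(2000):
    items = rng.multivariate_normal(np.zeros(k), cov, size=n)

    # Delete 10% of item responses completely at random, but keep at
    # least one observed item per respondent so a person mean exists.
    mask = rng.random((n, k)) < 0.10
    mask[mask.all(axis=1), 0] = False
    observed = np.where(mask, np.nan, items)

    # Person-mean imputation: fill each missing item with the mean of
    # that respondent's observed items on the same scale.
    person_mean = np.nanmean(observed, axis=1, keepdims=True)
    imputed = np.where(np.isnan(observed), person_mean, observed)
    est_imputed.append(imputed.mean())

    # Listwise deletion: retain only fully observed respondents.
    complete = ~np.isnan(observed).any(axis=1)
    est_listwise.append(observed[complete].mean())

for name, est in [("person-mean imputation", est_imputed),
                  ("listwise deletion     ", est_listwise)]:
    print(f"{name}: bias = {np.mean(est):+.4f}, SD = {np.std(est):.4f}")

 Under this missing-completely-at-random mechanism both estimators are roughly unbiased (the true mean is 0), but listwise deletion discards about 40% of respondents in this sketch and therefore shows larger dispersion across replications, consistent with the pattern the abstract reports.
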
The Effectiveness of Methods for Analyzing Multivariate Factorial Data 
 McDonald, Robert A. State U. of New York, Albany canuck@capital.net (518) 442-4966 
 Seifert, Charles F. Siena College seifert@siena.edu (518) 782-6501 
 Lorenzet, Steven J. State U. of New York, Albany sloren@worldnet.att.net (518) 442-4386 
 Givens, Susan  State U. of New York, Albany susangiv@aol.com (518) 442-4386 
 Jaccard, James  State U. of New York, Albany jjj20@csc.albany.edu (518) 442-4864 
 A Monte Carlo simulation was used to examine the effectiveness of univariate analysis of variance (ANOVA), multivariate analysis of variance (MANOVA), and multiple indicator structural equation (MISE) modeling for analyzing data from multivariate factorial designs. The research design reflected a multivariate factorial situation with four dependent variables, two factors (each with two levels), an interaction term, and a covariate. Parameter estimates for univariate and multivariate model effects obtained from the three data-analytic methods were examined. In the small-sample-size conditions (n = 60), the MISE method yielded downwardly biased standard errors for the univariate parameter estimates. Although downwardly biased standard errors enhance statistical power, this comes at the cost of inflated Type I error rates. In the large-sample-size conditions (n = 120), the MISE method outperformed MANOVA and ANOVA when the covariate accounted for variation in the dependent variable and when both the independent and dependent variables were unreliable. In terms of multivariate statistical tests for model effects, MANOVA outperformed the MISE method in the Type I error conditions, and the MISE method outperformed MANOVA in the Type II error conditions. The Bonferroni-type methods were overly conservative in controlling Type I error rates for univariate tests of model effects, but the modified Bonferroni method had higher statistical power than the standard Bonferroni method. Both the standard and modified methods adequately controlled multivariate Type I error rates. (An illustrative sketch of the design follows below.)
 Keywords: Multivariate; Factorial; Data
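
 As an illustration of this kind of design, here is a Python sketch using NumPy, pandas, and statsmodels. It is not the authors' simulation: the effect sizes, inter-DV correlation, and factor coding are assumptions, and the MISE analysis is omitted because it would require a dedicated structural equation modeling package. The sketch generates one multivariate factorial dataset and runs the univariate and multivariate analyses.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 120                                    # the paper's large-sample condition
a = np.repeat([0, 1], n // 2)              # factor A, two levels
b = np.tile(np.repeat([0, 1], n // 4), 2)  # factor B, crossed with A
x = rng.normal(size=n)                     # covariate

# Four correlated dependent variables driven by the main effects and the
# covariate only, so the A x B interaction is truly null here and any
# rejection of it is a Type I error.
cov = np.full((4, 4), 0.4) + 0.6 * np.eye(4)
noise = rng.multivariate_normal(np.zeros(4), cov, size=n)
Y = (0.5 * a + 0.3 * b + 0.4 * x)[:, None] + noise

df = pd.DataFrame(Y, columns=["y1", "y2", "y3", "y4"])
df["a"], df["b"], df["x"] = a, b, x

# Univariate ANOVA (an ANCOVA, given the covariate) on one dependent variable.
fit = ols("y1 ~ C(a) * C(b) + x", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))

# MANOVA across all four dependent variables with the same model terms.
mv = MANOVA.from_formula("y1 + y2 + y3 + y4 ~ C(a) * C(b) + x", data=df)
print(mv.mv_test())

 Wrapping this in a replication loop and tallying how often each test rejects the truly null interaction at alpha = .05 yields empirical Type I error rates; power conditions follow by giving the interaction a nonzero coefficient.
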
Developing a Procedure to Correct for Range Restriction Involving Both Organizational Selection and Individuals' Rejection of Job Offers 
 Yang, Hyuckseung  U. of South Carolina yang@darla.badm.sc.edu (803) 777-5979 
 Computing an unbiased estimate of the predictor-criterion validity coefficient has long been a central research topic in personnel selection. Range restriction in a selection setting occurs when the validation sample is not randomly selected and is therefore systematically different from the applicant population. The conventional correction formulas for range restriction are limited in their applicability to real settings: they require the very restrictive conditions that the selection-variable scores on which administrative selection was based are directly measured, and that all applicants above a cutoff on the selection variable are present in the sample. The first condition can be relaxed using the econometric sample-selection bias model; however, that model must be extended to relax the second condition. This paper develops a procedure, called the doubly restricted model, that is applicable to situations in which neither condition is fulfilled. Simulation results show that the procedure is quite effective in estimating the validity coefficient. An assumption of the procedure and a limitation of the study are discussed. (An illustrative sketch of the baseline problem follows below.)
 Keywords: range; restriction; selection
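
 To make the baseline problem concrete, the following Python/NumPy sketch simulates direct selection on an observed predictor and applies the conventional Thorndike Case II correction, i.e., the case in which both restrictive conditions hold. The cutoff, sample size, and true validity are illustrative assumptions, and the paper's doubly restricted estimator is not reproduced here.

import numpy as np

rng = np.random.default_rng(2)
rho = 0.5                    # true predictor-criterion validity (assumed)
n_applicants = 100_000

# Applicant population: predictor x and criterion y correlate at rho.
x = rng.normal(size=n_applicants)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n_applicants)

# Direct (explicit) selection: only applicants above a cutoff on x enter
# the validation sample, so the observed range of x is restricted.
hired = x > np.quantile(x, 0.7)          # top 30% selected
r_restricted = np.corrcoef(x[hired], y[hired])[0, 1]

# Conventional Thorndike Case II correction, which assumes selection was
# made directly on a fully observed x and that everyone above the cutoff
# is in the sample -- exactly the conditions the paper relaxes.
u = x.std() / x[hired].std()             # unrestricted / restricted SD ratio
r_corrected = (r_restricted * u
               / np.sqrt(1 - r_restricted**2 + (r_restricted * u)**2))

print("true validity:      ", rho)
print("restricted r:       ", round(r_restricted, 3))
print("Case II corrected r:", round(r_corrected, 3))

 When selection is instead based on imperfectly measured variables, or when some selected applicants reject the job offer and drop out of the sample, these assumptions fail; those are the situations the doubly restricted model is designed to handle.
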