Session Summary

Session Number: 339
Session ID: S1222
Session Title: Selection and Performance Appraisal
Short Title: Selection & Appraisal
Session Type: Interactive Paper
Hotel: Hyatt East
Floor: LL3
Room: Wacker West (1)
Time: Tuesday, August 10, 1999, 8:30 AM - 10:10 AM

Sponsors

GDO (Audrey Murrell) amurrell@katz.business.pitt.edu (412) 648-1651 
HR (Lynn Shore) mgtlms@langate.gsu.edu (404) 651-3038 
MOC (Kathleen Sutcliffe) ksutclif@umich.edu (734) 764-2312 

General People

Facilitator Thomas, Kecia M. U. of Georgia kthomas@arches.uga.edu 706-542-0057 

Submissions

The Role of Social Cognition on Rater Evaluations of Job Applicants: When do Interviewers Adjust Ratings to Account for Situational Influences? 
 Chapman, Derek S. U. of Waterloo dschapma@watarts.uwaterloo.ca (519) 888-4567 ext. 3786 
 Webster, Jane  U. of Waterloo jwebster@mansci2.uwaterloo.ca (519) 885-1211 
 This study examined how rater cognitive processes affected applicant evaluations within a framework of social hypothesis testing. We also explored the role of rater personality traits in their evaluations of applicants. Evaluations provided by eighty raters indicated that their evaluations of applicants were influenced by rater personalities and by cognitive bias corrections. Raters demonstrated a default naive theory that videoconference-based applicants were disadvantaged by the communication medium used to conduct the interview. As a result, raters adjusted their ratings in favor of videoconference-based applicants, who they believed were disadvantaged, to compensate for this perceived bias. Raters' scores on the Big Five personality factors played a minor role in their evaluations of candidates: raters who scored higher on conscientiousness rated candidates more favorably. Implications for theories of selection decisions are discussed, as are ramifications of using videoconference technologies for interviewing applicants at a distance.
 Keywords: Selection Interview; Decision Making; Videoconference
Frame of Reference Training With Multisource Raters: A Field Study 
 Tyler, Catherine L. Florida Atlantic University kltyler@yahoo.com (941) 437-5856 
 Bernardin, H. John Florida Atlantic University bernardi@fau.edu (561) 297-3640 
 In the area of performance appraisal, Frame of Reference (FOR) training has received a great deal of research attention. However, previous studies of FOR training effectiveness have been conducted in laboratory settings using students. The present study examines managers in work settings, rated on actual work performance. Another increasingly researched area is the 360-degree performance appraisal process, in which multiple raters (peers, subordinates, superiors, and self) all rate performance. This study utilized both self- and peer-ratings to determine the degree of congruence between raters. It further contributes to the body of knowledge about performance appraisal training by examining variation in self-other ratings as a function of receiving FOR training. Both self- and peer-ratings were compared for managers who received FOR training and managers who did not. While the timing of FOR training (before or after the appraisal period) did not produce significant differences in congruence of self-other scores, FOR training did result in a significant increase in rating accuracy among the group that received it. Possible explanations for the mixed results are discussed.
 Keywords: Frame of Reference training; performance appraisal; 360 degree appraisal
Bias, Error, and Favoritism in Performance Ratings: Motivational, Socio-Cultural, and Cognitive Processes 
 Smith, D. Randall Rutgers U., New Brunswick drasmith@rci.rutgers.edu (732) 445-4740 
 DiTomaso, Nancy  Rutgers U., Newark/New Brunswick ditomaso@andromeda.rutgers.edu (973) 353-5984 
 Farris, George F. Rutgers U., Newark/New Brunswick gfarris@gsmack.rutgers.edu (973) 353-5982 
 Cordero, Rene  New Jersey Institute of Technology cordero@tesla.njit.edu (973) 596-6417 
 In a study of performance ratings for a sample of 2,445 scientists and engineers from 24 U.S. companies, we find more evidence for in-group favoritism than for out-group bias. In the analysis, we consider the separate effects of bias by rater, ratee, and rater-ratee interaction, by gender, race/ethnicity, and nativity. Because of the unique structure of the data, we are able to statistically control for psychometric error (leniency, severity, and restriction of range) and hence remove the effects of autocorrelation from the analysis. We also control for patents and publications, which are relevant measures of performance for scientists and engineers. Our findings are consistent with a conceptual framework that links cognitive processes with motivational and socio-cultural influences, especially under conditions where levels of ambivalence are high, issues of inequality are salient, and there is normative support for egalitarianism and fairness.
 Keywords: Social identity theory; Bias; Performance ratings
Perceived Similarity and Performance Rating Accuracy: A Test of the Criterial Range Model 
 Cardy, Robert L. Arizona State U., Main Robert.Cardy@asu.edu (602) 965-6445 
 Miller, Janice S. U. of Wisconsin, Milwaukee jsm@uwm.edu (414)229-2246 
 Selvarajan, T. T. Arizona State U., Main ttselva@asu.edu (602) 965-3431 
 This study focused on a measurement aspect of the performance appraisal process, providing initial empirical evidence of the relationship between the level of perceived similarity among ratees and the subsequent accuracy of performance ratings. The study built on earlier frame-of-reference research to extend the examination of rater calibration procedures. To do so, it applied a criterial range model first proposed by Gravetter and Lockhead (1973) to the domain of performance rating accuracy. The model's central assertion is that judgment accuracy is a function of the perceptual range over which a human judge or rater makes assessments. Criterial range was assessed via the degree of similarity among a set of stimuli. First, a laboratory study investigated whether rater error increased as the criterial range for a set of ratees increased. Second, it proposed that rating error would increase as a function of rater affect directed toward a target ratee. Finally, the study investigated whether the opportunity to categorize ratees into performance groups influenced criterial range. Results confirmed the predicted relationship between criterial range (similarity) and differential accuracy, supporting the prediction of the criterial range model in performance appraisal. Marginal support was found for the remaining two hypotheses. Directions for additional study are proposed that may extend applications of the criterial range concept in future performance appraisal research.
 Keywords: Criterial range; Performance appraisal; Perceived similarity
Investing in Affirmative Action: Long-Term Performance Effects of Affirmative Action Awards 
 McCormick, Blaine  Baylor U. Blaine_McCormick@Baylor.edu 254-710-4158 
 Bierman, Len  Texas A&M U. lbierman@tamu.edu 406-845-4851 
 Taylor, Beck  Baylor U. Beck_Taylor@Baylor.edu 254-710-2263 
 Recent controversial studies by Wright, Ferris, Hiller, and Kroll (1995) and Hiller and Ferris (1993) found positive immediate stock market reactions when firms win U.S. Department of Labor Affirmative Action awards. Our research, however, finds dissimilar results when extending their analysis into the long term. This research thus questions the purported positive linkage between exemplary firm affirmative action programs and stock performance.
 Keywords: performance; affirmative action
Personnel Selection with Incomplete Information: An Extension of the Inference Effect 
 Blesing, Kristen Marie U. of Western Australia kblesing@ecel.uwa.edu.au (618) 9380 7042 
 This study investigated personnel selection evaluations when important items of information were missing. One hundred fifty participants evaluated hypothetical job candidates for an IT management position, based on personnel test scores in management, sales, and computer programming. The management test score was emphasized as being the most important, and all candidates were missing at least one test score. The correlation between employees' test scores was varied between groups of subjects (-.85, .00, .85). Responses were consistent with a relative-weight averaging model in all three correlation conditions. Averaging and adding strategies were distinguished by varying the importance of one attribute independent of the others. The results were consistent with the predictions of the Inferred Information Model (Johnson and Levin, 1985) in the positive and zero correlation conditions. The prediction that the importance of the missing attribute would interact with the inference effect was strongly supported: candidates with one test score were rated significantly lower than comparable candidates with an average score on the second test. The extent of discounting was not influenced by correlation condition. An important finding was the discovery of an inference effect, interacting with attribute importance, for two-test employees as well as for single-test employees. This confirms the generality of the inference effect and suggests that it may extend to applied decisions with more abundant information sources.
 Keywords: Personnel Selection; Incomplete Information; Inference Effect