Session Summary

Session Number: 539
Session ID: S375
Session Title: Person-Organization Fit and Employee Selection
Short Title: Selection & Organizational Fit
Session Type: Division Paper
Hotel: Hyatt West
Floor: 3
Room: Field
Time: Monday, August 09, 1999, 10:40 AM - 12:00 PM

Sponsors

HR (Lynn Shore) mgtlms@langate.gsu.edu (404) 651-3038

General People

Chair Kacmar, K. Michele Florida State U. mkacmar@garnet.acns.fsu.edu (850) 644-7881 
Discussant Ployhart, Robert E. Michigan State U. ployhart@pilot.msu.edu (517) 353-9166 
Discussant Russell, Craig J. U. of Oklahoma cruss@ou.edu (405) 325-2458 

Submissions

The Use of Person-Group Fit for Employment Selection: A Missing Link in Person-Environment Fit 
 Werbel, James D. Iowa State U. jwerbel@iastate.edu 515-294-2717 
 This paper proposes that person-group fit is a viable paradigm for selecting job applicants and offers suggestions about ways to use person-group fit in employment selection. The paper develops the construct of person-group fit and offers propositions about both predictor and criterion variables associated with it. It proposes one way to assess person-group fit. Finally, it addresses several implications of using the person-group fit paradigm in human resource management practices.
 Keywords: Person-environment fit
Personality and personnel selection: Reexamining the impact of motivated distortion on construct validity 
 Smith, Brent  Cornell U. bs58@cornell.edu (607)255-7704 
 While there is an emerging consensus that social desirability does not meaningfully affect criterion-related validity, several researchers are, once again, arguing that response distortion in the form of social desirability degrades the construct validity of personality measures. When people are placed in a situation that provides a motive to distort (e.g., an applicant setting where the incentive to fake a test is high), they argue, construct validity inevitably "decays". Griffith (1997), Christiansen (1998), and Douglas, McDaniel, and Snell (1996) have provided convincing evidence that these psychometric properties of personality and other non-cognitive measures are, in fact, diminished when people distort their responses. However, these studies employed between-subjects designs in which one group of respondents (students) was instructed to respond honestly to a personality inventory and another group (also students) was instructed to "fake good". I believe that such a manipulation exacerbates the effects of response distortion beyond what would be expected under any realistic circumstance (e.g., an applicant setting) and provides little information about the actual occurrence of response distortion or its impact on the construct validity of personality measures. The research reported here was designed to assess the impact of response distortion on the construct validity of personality measures in real-world contexts, not under artificial instructional conditions. Results of two studies support the conclusion that response distortion has little impact on the construct validity of personality measures used in selection contexts.
 Keywords: personality assessment; social desirability; personnel selection
An examination of calculator use on employment tests of mathematical ability 
 Burroughs, Susan M. U. of Tennessee, Knoxville SusanMBurr@AOL.COM (423) 974-3161 
 Bing, Mark N. U. of Tennessee, Knoxville MNB343@AOL.COM (423) 909-0591 
 Handheld calculators have been used on the job for over 20 years and are an integral part of many job domains, yet the degree to which these devices can affect scores on employment tests of mathematical aptitude has yet to be thoroughly examined. Selection professionals may wish to provide job candidates with these computational aids during employment testing to increase the overlap between the test and job domains, thereby increasing the content validity of their selection procedure. However, the use of such aids could alter the psychometric properties of the test, such as its reliability and validity. Using a within-subjects research design, the current study (N=167) investigated the effects of calculator use on the psychometric properties of two employment tests of mathematical ability: the Employee Aptitude Survey (EAS), which assesses computational ability, and the Personnel Test for Industry (PTI), which measures mathematical reasoning skills. A consistent pattern of increase or decrease in reliability and validity across calculator conditions was not obtained for the PTI, whereas a consistent, albeit non-significant, decrease in test reliability and validity was found for the EAS under the condition of calculator use. However, when the total EAS scores were broken into sub-test scores representing performance on the integer, decimal, and fraction sections, a significant decrease in validity under the condition of calculator use was detected. Implications for future research and practice are addressed.
 Keywords: selection; testing; psychometrics
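The reliability comparison this abstract describes can be sketched in a few lines; the simulated score matrices and the larger noise term under calculator use are hypothetical assumptions for illustration only, not the authors' data or method:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical within-subjects data: the same 200 examinees answer 10
# dichotomously scored items with and without a calculator. The larger
# noise term in the calculator condition is an assumption that mimics
# the lower inter-item consistency reported for the EAS.
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
no_calc = (ability[:, None] + rng.normal(scale=1.0, size=(200, 10)) > 0).astype(int)
with_calc = (ability[:, None] + rng.normal(scale=1.5, size=(200, 10)) > 0).astype(int)

print(round(cronbach_alpha(no_calc), 3))
print(round(cronbach_alpha(with_calc), 3))
```

Comparing the two alphas (ideally with a confidence interval) is one way to formalize the "decrease in test reliability across calculator conditions" that the study tests.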
College Grade Point Average as a Selection Device: Ethnic Group Differences and Adverse Impact of a Forgotten Predictor of Job Performance 
 Roth, Philip L. Clemson U. rothp@clemson.edu 864-656-1039 
 Bobko, Philip  Gettysburg College pbobko@gettysburg.edu 717-337-6983 
 College grade point average (GPA) can be, and is, used in a variety of ways in personnel selection. Grades can be used as a "screen" for obtaining interviews or as an item of information within interviews or biodata instruments. Unfortunately, there is little empirical research in human resource management that informs researchers or practitioners about the ethnic group differences and adverse impact implications of using cumulative GPA for selection. Data from a medium-sized university in the Southeast suggest that the standardized average Black-White difference is .21 for sophomores, .49 for juniors, and .78 for seniors. The ethnic group difference for college seniors is not far from the d=1.00 associated with cognitive ability tests and is much larger than that of most other selection devices, such as personality dimensions (d's of approximately .10) and structured interviews (d=.24). We also conduct adverse impact analyses at three GPA screens (3.00, 3.25, and 3.50) to demonstrate that employers (or educators) will face adverse impact at all three levels if GPA continues to be implemented as a selection device. Implications and future research are discussed.
 Keywords: GPA; Ethnic group differences; Adverse Impact
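The two statistics this abstract relies on, the standardized mean difference d and the selection-rate ratio behind the four-fifths adverse-impact rule, can be sketched briefly; the GPA samples below are hypothetical and the helpers are illustrations, not the authors' analysis:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def adverse_impact_ratio(majority, minority, cutoff):
    """Minority-to-majority selection-rate ratio at a GPA screen.

    Under the four-fifths rule, a ratio below 0.80 is taken as
    evidence of adverse impact.
    """
    rate_majority = sum(g >= cutoff for g in majority) / len(majority)
    rate_minority = sum(g >= cutoff for g in minority) / len(minority)
    return rate_minority / rate_majority

# Hypothetical GPA samples, purely for illustration.
white_gpas = [3.6, 3.4, 3.1, 2.9, 2.8, 2.6]
black_gpas = [3.3, 3.1, 2.9, 2.7, 2.6, 2.4]

for screen in (3.00, 3.25, 3.50):
    print(screen, round(adverse_impact_ratio(white_gpas, black_gpas, screen), 2))
print(round(cohens_d(white_gpas, black_gpas), 2))
```

Raising the GPA screen shrinks both selection rates, but typically shrinks the lower-scoring group's rate faster, which is why the paper finds adverse impact at all three cutoffs.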