THE EFFECTS OF A JOINT CORRECTION FOR THE ATTENUATING EFFECT OF CRITERION UNRELIABILITY AND CASE 2 RESTRICTION OF RANGE ON THE VALIDITY COEFFICIENT

ABSTRACT

This paper reports the results of a portion of a more comprehensive study on the effect of corrections for random error of measurement in both the criterion and the predictor and/or various forms of restriction of range on the parameters [e.g. ρ[X,Y], E[Y|X], σ²[X]] required to specify and justify a selection procedure. The objective of this paper is to determine the effect of a joint correction for criterion unreliability and Case 2 restriction of range on the validity coefficient. Results are depicted graphically and discussed.

Selection, as it is traditionally interpreted, represents a critical human resource intervention in any organisation in so far as it regulates the movement of employees into, through and out of the organisation. As such, selection firstly represents a potentially powerful instrument through which the human resource function can add value to the organisation [Boudreau, 1983b; Boudreau & Berger, 1985a; Cascio, 1991b; Cronshaw & Alexander, 1985]. Selection, furthermore, represents a relatively visible mechanism through which access to employment opportunities is regulated. Because of this latter aspect, selection, more than any other human resource intervention, has been singled out for intense scrutiny from the perspective of fairness and affirmative action [Arvey & Faley, 1988; Milkovich & Boudreau, 1994; Singer, 1993]. Two basic criteria are thereby implied in terms of which selection procedures need to be evaluated, namely efficiency and equity [Milkovich & Boudreau, 1994]. The quest for efficient and equitable selection procedures requires periodic psychometric audits to provide the feedback needed to refine the selection procedure to greater efficiency and to provide the evidence required to vindicate the organisation should it be challenged in terms of anti-discriminatory legislation.
The empirical evidence needed to meet the aforementioned burden of persuasion is based on a simulation of the actual selection procedure on a sample taken from the applicant population. According to the Guidelines for the validation and use of personnel selection procedures [Society for Industrial Psychology, 1992], the Principles for the validation and use of personnel selection procedures [Society for Industrial and Organisational Psychology, 1987] and the Kleiman and Faley [1985] review of selection litigation, such a psychometric audit of a selection procedure would require the human resource function to demonstrate that:
• the selection procedure has its foundation in a scientifically credible performance theory;
• the selection procedure constitutes a business necessity; and
• the manner in which the selection strategy combines applicant information can be considered fair.

The empirical evidence needed to meet this burden of persuasion is acquired through a simulation of the actual selection procedure on a sample taken from the applicant population. Internal and external validity constitute two criteria in terms of which the credibility of the evidence produced by such a simulation would be evaluated. The following two crucial questions are thereby indicated:
• to what extent can the researcher be confident that the research evidence produced by the selection simulation corroborates the latent structure/nomological network postulated by the research hypothesis within the limits set by the specific conditions characterising the simulation?; and
• to what extent can the researcher be confident that the conclusions reached on the basis of the simulation will generalise or transport to the area of actual application?
The conditions under which selection procedures are typically simulated and those prevailing at the eventual use of a selection procedure normally differ to a sufficient extent to challenge the transportability of the validation research evidence. Nevertheless, given the applied nature of selection validation research, an attempt at generalisation is unavoidable. According to Stanley and Campbell [1963], external validity is threatened by the potential specificity of the demonstrated effect of the independent variable[s] to particular features of the research design not shared by the area of application. In selection validation research the effect of the [composite] independent variable on the criterion is captured by the validity coefficient. The area of application is characterised by a sample of actual applicants drawn from the applicant population and measured on a battery of fallible predictors with the aim of "estimating their actual contribution to the organisation [i.e. ultimate criterion scores] and not an indicator of it attenuated by measurement error" [Campbell, 1991, p. 694]. The estimate is derived from a weighted linear composite of predictors derived from a representative sample of the actual applicant population. The question regarding external validity, in the context of selection validation research, essentially represents an inquiry into the unbiasedness of the parametric validity coefficient estimate [i.e. the sample statistic] obtained through the validation study. The parameter of interest is the correlation coefficient obtained when the sample weights derived from a representative sample of subjects are applied to the applicant population and the weighted composite score is correlated with the criterion, unattenuated by measurement error, in the population [Campbell, 1991].
The preceding discussion clearly identifies the term "applicant population" to be of central importance should a sufficiently precise depiction of the area of actual application be desired. The term "applicant population", however, even if defined as the population to which a selection procedure will be applied, still has an annoying impreciseness to it. A more unambiguous definition of the term, however, depends on how the selection procedure is positioned relative to any selection requirements already in use [i.e. whether it would replace, follow on, or be integrated with current selection requirements]. This issue, moreover, is linked to the question regarding the appropriate decision alternative with which to compare the envisaged selection procedure when examining its strategic merit. In the context of selection validation research, given the aforementioned depiction of the area of application, the following specific threats to external validity can be identified [Campbell, 1991; Lord & Novick, 1968; Tabachnick & Fidell, 1989]:
• the extent to which the actual or operationalised criterion contains random error of measurement;
• the extent to which the actual or operationalised criterion is systematically biased, i.e. the extent to which the actual criterion is deficient and/or contaminated [Blum & Naylor, 1968];
• the extent to which the validation sample is an unrepresentative, biased sample from the applicant population in terms of homogeneity and specific attributes [e.g. motivation, knowledge/experience]; and
• the extent to which the sample size and the ratio of sample size to number of predictors allow capitalisation on chance and thus overfitting of the data.
The conditions listed as threats all affect the validity coefficient [Campbell, 1991; Crocker & Algina, 1986; Dobson, 1988; Hakstian, Schroeder & Rogers, 1988; Lord & Novick, 1968; Mendoza & Mumford, 1987; Messick, 1989; Olsen & Becker, 1983; Schepers, 1996], some consistently exerting upward pressure, others downward pressure, while for some the direction of influence varies. It thus follows that, to the extent that the aforementioned threats operate in the validation study but do not apply to the actual area of application, the obtained validity coefficient cannot, without formal consideration of these threats, be generalised to the actual area of application. Thus, the obtained validity coefficient cannot, without appropriate corrections, be considered an unbiased estimate of the actual validity coefficient of interest. Statistical corrections to the validity coefficient are generally available to estimate the validity coefficient that would have been obtained had it been calculated under the conditions that characterise the area of actual application [Gulliksen, 1950; Pearson, 1903; Thorndike, 1949]. Campbell [1991, p. 701] consequently recommends that:

"If the point of central interest is the validity of a specific selection procedure for predicting performance over a relatively long time period for the population of job applicants to follow, then it is necessary to correct for restriction of range, criterion unreliability, and the fitting of error by differential predictor weights. Not to do so is to introduce considerable bias into the estimation process."

The remainder of the argument in terms of which a selection procedure is developed and justified could, however, also be biased by any discrepancy between the conditions under which the selection procedure is simulated and those prevailing during the actual use of the selection procedure.
Relatively little concern, however, seems to exist for the transportability of the decision function derived from the selection simulation and of the descriptions/assessments of selection decision utility and fairness. This seems to be a somewhat strange state of affairs. The external validity problems of validation designs are reasonably well documented [Barrett, Phillips & Alexander, 1981; Cook, Campbell & Peracchio, 1992; Guion & Cranny, 1982; Sussmann & Robertson, 1986]. It is therefore not as if the psychometric literature is unaware of the problem of generalising validation study research findings to the ultimate area of application. The decision function is probably the pivot of the selection procedure because it firstly captures the underlying performance theory but, more importantly from a practical perspective, because it governs the actual selection decisions.

Basically the same logic also applies to the evaluation of the decision rule in terms of selection utility and fairness.
Correcting only the validity coefficient would leave the "bottom-line" evaluation of the selection procedure unaltered.
Restricting the statistical corrections to the validity coefficient basically means that, practically speaking, nothing really changes.

RESEARCH OBJECTIVES
The general objective of the research reported here is firstly to determine whether specific discrepancies between the conditions under which the selection procedure is simulated and those prevailing during the actual use of the selection procedure produce bias in the estimates required to specify and justify the procedure. If bias is found, the objective, furthermore, is to delineate appropriate statistical corrections to the validity coefficient, the decision rule and the descriptions/assessments of selection decision utility and fairness, required to align the contexts of evaluation/validation and application. The general objective of the research reported here is, finally, to determine whether the corrections should be applied in validation research. With reference to this latter aspect the following argument is pursued. The evaluation of any personnel intervention in essence constitutes a process where information is obtained and analysed/processed at a cost with the purpose of making a decision [i.e. choosing between two or more treatments] which results in outcomes with a certain value to the decision maker. To add additional information to the evaluation/decision process and/or to extend the analyses of information could be considered rational if it results in an increase in the value of the outcomes at a cost lower than the increase in value. The foregoing argument thus implies that corrections applied to the obtained correlation coefficient are rational to the extent that [Boudreau, 1991]:
• the corrections change decisions concerning:
  o the validity of the research hypothesis [or at least the a priori probability of rejecting H0, assuming H0 to be false]; and/or
  o the choice of which applicants to select; and/or
  o the appropriate selection strategy option; and/or
  o the fairness of a particular selection strategy;
• the changes in decisions have significant consequences; and
• the cost of applying the statistical corrections is low.
The argument thus implies that there is little merit in applying statistical corrections should they not change any part of the total case built by the validation research team in defence of the selection procedure, even if the corrections should rectify systematic bias in the obtained estimates.
To cover all of the aforementioned in a single article would, however, constitute a somewhat overly ambitious endeavour. This paper consequently restricts itself to the more modest objective of determining the effect of a joint correction for criterion unreliability and Case 2 restriction of range on the validity coefficient. Case 2 restriction of range refers to the situation where selection occurred [directly/explicitly] on the predictor [or the criterion] through complete truncation on X at Xc [or on Y at Yc] and both restricted and unrestricted variances are known only for the explicit selection variable X [or Y].
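The attenuating effect of Case 2 truncation can be illustrated with a small simulation sketch. The sketch is not part of the study reported here; the correlation, cutoff and sample size are hypothetical values chosen purely for illustration.

```python
# Illustrative sketch (not from the study reported here): simulate Case 2
# restriction of range by truncating a bivariate-normal sample on the
# predictor X at a cutoff Xc, and observe the attenuated correlation.
import math
import random

random.seed(1)

RHO = 0.50   # unrestricted correlation rho[X,Y] (assumed)
XC = 0.0     # explicit cutoff on X (assumed): retain applicants with X >= Xc

# Draw (X, Y) from a standard bivariate normal with correlation RHO.
pairs = []
for _ in range(50_000):
    x = random.gauss(0.0, 1.0)
    y = RHO * x + math.sqrt(1.0 - RHO**2) * random.gauss(0.0, 1.0)
    pairs.append((x, y))

def corr(data):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    syy = sum((y - my) ** 2 for _, y in data)
    return sxy / math.sqrt(sxx * syy)

restricted = [(x, y) for (x, y) in pairs if x >= XC]  # complete truncation at Xc

print(f"unrestricted rho[X,Y]: {corr(pairs):.3f}")
print(f"restricted rho[x,y]:   {corr(restricted):.3f}")  # noticeably smaller
```

The restricted coefficient falls well below the unrestricted one even though selection occurred on X alone, which is the bias the Case 2 correction is meant to undo.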
An appropriate notational system is needed to pursue this objective. The conventional Greek symbols will be used to represent population parameters: σ² for variance, μ for mean, ρ for correlation. Parameters will carry suitable subscripts to identify the variables involved. The following notation will be used: σ²[X], μ[X], ρ[X,Y] and ρ[x,y]. Although a considerable literature exists regarding the correction of correlation coefficients for the separate attenuating effects of error of measurement and restriction of range [Pearson, 1903; Gulliksen, 1950; Ghiselli, Campbell & Zedeck, 1981; Held & Foley, 1994; Linn, 1983; Olson & Becker, 1983; Ree, Carretta, Earles & Albert, 1994], relatively less attention has been given to the theory underlying the correction of a correlation coefficient for the joint effects of error of measurement and restriction of range [Bobko, 1983; Lee, Miller & Graham, 1982; Mendoza & Mumford, 1987; Schmidt, Hunter & Urry, 1976].
In a typical validation study, restriction of range and criterion unreliability are simultaneously present. Their effects combine to yield an attenuated validity coefficient that could severely underestimate the operational validity [Lee, Miller & Graham, 1982; Schmidt, Hunter & Urry, 1976]. It thus seems to make intuitive sense to double correct an obtained validity coefficient for the attenuating effect of both factors. The APA, however, through their Standards for Educational and Psychological Tests [APA, 1974, p. 41], initially recommended that:

"It is ordinarily unwise to make sequential corrections, as in applying a correction to a coefficient already corrected for restriction of range. Chains of corrections may be useful in considering possible further research, but their results should not be seriously reported as estimates of population correlation coefficients."

Schmidt, Hunter and Urry [1976], though, consider the APA recommendation to be in error and propose that the obtained validity coefficient should be sequentially corrected for the effects of both restriction of range and criterion unreliability so as to obtain an estimate of the actual operational validity. The revised edition of the Standards for Educational and Psychological Tests [APA, 1985] subsequently also seems to have softened its position on this topic by abstaining from any comment. The stepwise correction procedure suggested by Schmidt, Hunter and Urry [1976] involves first correcting both the obtained validity and reliability coefficients for restriction of range, since both coefficients apply only to a restricted applicant group and thus are to a greater or lesser extent negatively biased estimates of the operational reliability and validity coefficients.
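The stepwise procedure can be sketched in code. This is a minimal sketch, not the study's own implementation: it assumes the standard Thorndike Case 2 formula for the range-restriction step, the homogeneous-error-variance formula for the reliability step, and all numeric inputs (function names included) are hypothetical.

```python
# A minimal sketch of the stepwise double correction proposed by Schmidt,
# Hunter and Urry [1976] for selection directly on X (Case 2):
#   1. correct the restricted criterion reliability for range restriction;
#   2. correct the restricted validity for range restriction;
#   3. disattenuate the result for criterion unreliability only.
# Function names and the illustrative inputs are hypothetical.
import math

def case2_validity(r_xy, K):
    """Standard Case 2 correction; K = sigma[X]/sigma[x]
    (unrestricted over restricted predictor standard deviation)."""
    return K * r_xy / math.sqrt(1.0 - r_xy**2 + (K * r_xy) ** 2)

def unrestricted_reliability(rel_y, var_ratio):
    """Reliability corrected for range restriction, assuming homogeneous
    error variance: rho[ttY] = 1 - (sigma2[y]/sigma2[Y]) * (1 - rho[tty]).
    var_ratio = sigma2[y]/sigma2[Y] (restricted over unrestricted)."""
    return 1.0 - var_ratio * (1.0 - rel_y)

def double_corrected_validity(r_xy, K, rel_y, var_ratio):
    rel_Y = unrestricted_reliability(rel_y, var_ratio)   # step 1
    r_XY = case2_validity(r_xy, K)                       # step 2
    return r_XY / math.sqrt(rel_Y)                       # step 3: criterion only

# Hypothetical figures: observed r = .25 in a group selected with K = 2,
# restricted criterion reliability .60, variance ratio sigma2[y]/sigma2[Y] = .80.
print(round(double_corrected_validity(0.25, 2.0, 0.60, 0.80), 3))
```

Note that with K = 1 (no restriction) the Case 2 step leaves the coefficient unchanged, so the procedure then reduces to the traditional attenuation correction for the criterion.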
Equation 3 is suggested [Feldt & Brennan, 1989; Ghiselli, Campbell & Zedeck, 1981; Guion, 1965; Gulliksen, 1950; Lee, Miller & Graham, 1982] as an appropriate correction formula to correct the reliability coefficient for the attenuating effect of range restriction if homogeneity of error variance across the range of true criterion scores can be assumed [i.e. the assumption is that applicants were selected in such a manner that the true score variance is reduced whereas the error variance remains unaffected]. From the assumption of homogeneous error variance across the range of true criterion scores it follows that:

ρttY = 1 − σ²[y][1 − ρtty]/σ²[Y] …………………………… 3

The assumption that Equation 3 is based on, however, frequently does not hold [Feldt & Brennan, 1989]. A further problem with Equation 3 in the context of validation research, moreover, is that the criterion variance for the unrestricted group is logically impossible to obtain.
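The homogeneous-error-variance reasoning behind Equation 3 can be checked numerically: hold the error variance fixed, shrink the true-score variance, and verify that the correction recovers the unrestricted reliability. All numeric values below are illustrative assumptions, not data from the study.

```python
# Numerical check (illustrative values) of the range-restriction correction
# for reliability: selection reduces true-score variance while, by assumption,
# the error variance stays fixed; Equation 3 then recovers rho[ttY].

var_Y, rel_Y = 100.0, 0.81          # unrestricted criterion variance, reliability (assumed)
var_error = var_Y * (1.0 - rel_Y)   # error variance, unchanged by selection (assumption)

var_true_restricted = 36.0          # true-score variance after selection (assumed)
var_y = var_true_restricted + var_error   # restricted observed variance
rel_y = var_true_restricted / var_y       # restricted reliability (smaller than rel_Y)

# Equation 3: rho[ttY] = 1 - sigma2[y] * (1 - rho[tty]) / sigma2[Y]
recovered = 1.0 - var_y * (1.0 - rel_y) / var_Y
print(round(rel_y, 3), round(recovered, 3))   # recovered equals the unrestricted 0.81
```

The check also makes the practical problem concrete: applying Equation 3 requires σ²[Y], the unrestricted criterion variance, which is exactly the quantity a validation study cannot observe.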

Depending on the nature of the selection/restriction of range and the variable for which both the restricted and unrestricted variance is known, the correction of the validity coefficient for the attenuating effect of restriction of range will proceed through the appropriate correction formula. The validity coefficient corrected for restriction of range will then subsequently be corrected for the attenuating effect of criterion unreliability by employing the results of the preceding first two steps [i.e. the reliability and validity coefficients corrected for restriction of range] in the traditional attenuation correction formula for the criterion only. Lee, Miller and Graham [1982], however, point out that two situations need to be distinguished:
• range restriction directly on the predictor and unreliability in the predictor and the criterion; or
• range restriction directly on the latent trait measured by the predictor and unreliability in the predictor and the criterion.
Equation 13 shows the appropriate correction formula applicable when range restriction occurs directly on the ability/latent trait measured by the predictor [Mendoza & Mumford, 1987].
The derivation of Equation 13 assumes a linear, homoscedastic regression of the criterion Y on the predictor X in the unrestricted population and in addition makes the two usual restriction of range assumptions, namely that:
• the regression of actual job performance [i.e. the ultimate criterion] Y′ on ability will not be affected by explicit selection on the latent trait represented by X; and
• the ultimate criterion variance conditional on X′ will not be altered by explicit selection on the latent trait measured by X [Mendoza & Mumford, 1987].
From the assumption that the regression of actual job performance [i.e. the ultimate criterion] Y′ on ability will not be affected by explicit selection on the latent trait represented by X, it follows that:

ρ[X′,Y′]σ[Y′]/σ[X′] = ρ[x′,y′]σ[y′]/σ[x′] …………………………… 6

From the assumption that the ultimate criterion variance conditional on X′ will not be altered by explicit selection on the latent trait measured by X, it follows that:

σ²[Y′][1 − ρ²[X′,Y′]] = σ²[y′][1 − ρ²[x′,y′]] …………………………… 7

Equation 13 places rather formidable demands on the analyst in so far as it requires the reliability and variance of both variables in both the restricted and unrestricted groups to be known. This seems to limit the practical value of Equation 13. If it is possible to calculate both σ²[X] and σ²[Y] [and not only one of the two], it seems more than probable that one would also be able to calculate ρ[X,Y], ρttX and ρttY, and thus estimate ρ[TX,TY] with the traditional attenuation correction formula [Equation 12]. The need to infer ρ[TX,TY] indirectly via an equation like Equation 13 would then no longer exist. Mendoza and Mumford [1987] acknowledge the equation's requirement that the reliability of both measures be known in the restricted and unrestricted space, but do not regard this as a problem since the restricted and unrestricted reliabilities are related by Equation 3.
Equation 30 applies to the second, probably more prevalent, situation where restriction of range/selection occurs directly on the predictor [Mendoza & Mumford, 1987]. The derivation of Equation 30 assumes a linear, homoscedastic regression of the criterion Y on the predictor X in the unrestricted population and in addition makes the two usual restriction of range assumptions, namely that:
• the regression of the criterion Y on the predictor will not be affected by explicit selection on the predictor X; and
• the criterion variance conditional on X will not be altered by explicit selection on X [Mendoza & Mumford, 1987].
From the assumption that the regression of the criterion Y on the predictor will not be affected by explicit selection on the predictor X, it follows that:

ρ[X,Y]σ[Y]/σ[X] = ρ[x,y]σ[y]/σ[x] …………………………… 14

From the assumption that the criterion variance conditional on X will not be altered by explicit selection on the predictor X, it follows that:

σ²[Y][1 − ρ²[X,Y]] = σ²[y][1 − ρ²[x,y]] …………………………… 15

From Equation 15 it follows that:

σ²[Y] − σ²[Y]ρ²[X,Y] = σ²[y] − σ²[y]ρ²[x,y] …………………………… 16

Isolating the term ρ²[X,Y] in Equation 16:

ρ²[X,Y] = [σ²[Y] − σ²[y] + σ²[y]ρ²[x,y]]/σ²[Y] …………………………… 17

Substituting Equation 29 in Equation 27 and taking the square root yields Equation 30. Equation 30, however, still has rather limited utility in applied validation research. Its primary deficiency lies in the fact that it also corrects the correlation coefficient for the unreliability of the predictor variables. Correcting for unreliability in the predictor in a validation context is misleading. It would be of relatively little value to know the validity of a perfectly reliable predictor when such an infallible measuring instrument can never be available for operational use [Lee, Miller & Graham, 1982; Nunnally, 1978].

[Figure 1: The reaction of the double corrected correlation to changes in ρ[x,y] and ρtty; K = 3.]
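The two invariance assumptions (Equations 14 and 15) can be verified numerically with a round-trip check: fix hypothetical unrestricted parameters, derive the restricted statistics from the invariant slope and conditional variance, and confirm that the Case 2 correction recovers ρ[X,Y] exactly. All parameter values are illustrative assumptions.

```python
# Illustrative round-trip check of the invariance assumptions behind
# Equations 14-17 (hypothetical parameter values): the restricted statistics
# are derived from the invariant slope and conditional variance, after which
# the Case 2 correction recovers the unrestricted rho[X,Y].
import math

rho_XY, sd_X, sd_Y = 0.50, 1.0, 1.0   # unrestricted parameters (assumed)
K = 3.0                                # sigma[X]/sigma[x]; the K = 3 of Figure 1
sd_x = sd_X / K                        # restricted predictor standard deviation

slope = rho_XY * sd_Y / sd_X                     # invariant regression slope (Eq 14)
cond_var = sd_Y**2 * (1.0 - rho_XY**2)           # invariant conditional variance (Eq 15)

sd_y = math.sqrt(slope**2 * sd_x**2 + cond_var)  # restricted criterion SD
rho_xy = slope * sd_x / sd_y                     # restricted validity coefficient

# Case 2 correction, as implied by Equation 17:
corrected = K * rho_xy / math.sqrt(1.0 - rho_xy**2 + (K * rho_xy) ** 2)
print(round(rho_xy, 3), round(corrected, 3))   # restricted value, then 0.5 recovered
```

With K = 3, ρ[x,y] drops below 0.2 even though ρ[X,Y] = 0.5, which is consistent with the severe attenuation depicted in Figure 1.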
The findings reported here clearly indicate the dramatic consequence of correcting the observed validity coefficient for the attenuating effect of both restriction of range and criterion unreliability, especially when severe range restriction occurred and the criterion measures suffer from low reliability. Not to correct the observed validity coefficient will severely underestimate the actual validity of the selection procedure for the applicant population. Lee, Miller and Graham [1982] and Bobko [1983] concur that all the available evidence argues in favour of jointly correcting the validity coefficient for the attenuating effect of both range restriction and the unreliability of the criterion. Lee, Miller and Graham [1982] found most corrected validity coefficients to be slight overestimates of the true validity coefficient. In direct contrast to the findings reported by Lee, Miller and Graham [1982], Bobko [1983] concludes that, on average, the double corrected validity coefficient will still underestimate the operational validity coefficient. The research reported here does not permit any comment on bias in the corrected validity coefficient.
A further, less serious, limitation of both Equations 32 and 30 concerns the premise that selection can only occur directly on the predictor. Case C conditions [indirect restriction of range on the predictor and the criterion through direct selection on a third variable] probably constitute the predominant environment in which restriction of range corrections are required. Again, however, this problem can relatively easily be rectified by substituting the Case 2 restriction of range correction formula in the derivation of Equation 30 and Equation 32 with the appropriate Case C correction formula [Gulliksen, 1950; Thorndike, 1949].
Capital letters are used to denote random variables. Let X and Y denote the observed scores on the predictor and criterion respectively. Let TX, TY and EX, EY denote the true and error score components of the [unrestricted] observed predictor and criterion scores. The true and error score components of the restricted observed predictor and criterion scores will be denoted by corresponding lowercase letters. Let the to-be-corrected correlation coefficient calculated for the restricted group be indicated as ρ[x,y] and the to-be-estimated correlation coefficient as ρ[X,Y]. Let σ²[x] and σ²[y] represent the calculated [i.e. known] variances for the restricted group and σ²[X] and σ²[Y] the variances for the unrestricted group, of which only σ²[X] is known. The capital letter E will be reserved for use as the expected value operator. The reliability coefficients for the unrestricted criterion and predictor measurements will be denoted as ρttY and ρttX respectively.

THE CORRECTION OF A CORRELATION COEFFICIENT FOR THE JOINT EFFECTS OF ERROR OF MEASUREMENT AND RESTRICTION OF RANGE

σ[y]√[1 − ρtty] = σ[Y]√[1 − ρttY] …………………………… 1

Squaring Equation 1 and then multiplying by 1/σ²[Y]