Abstract
Orientation: Assessment Centres (ACs) are used globally for the selection and development of candidates. Limited empirical evidence exists of the ethical challenges encountered in the use of ACs, especially in South Africa (SA).
Research purpose: Firstly, to explore possible ethical challenges related to ACs in SA from the vantage point of the practitioner and, secondly, to search for possible solutions to these.
Motivation for the study: Decisions based on AC outcomes have profound implications for participants and organisations, and it is essential to understand potential ethical challenges to minimise these, specifically in the SA context, given its socio-political history, multiculturalism, diversity and pertinent legal considerations.
Research design, approach and method: A qualitative, interpretative research design was chosen. Data were collected by means of a semi-structured survey that was completed by 96 AC practitioners who attended an AC conference. Content analysis and thematic interpretation were used to make sense of the data. The preliminary findings were assessed by a focus group of purposively selected subject-matter experts (n = 16) who provided informed insights, which were incorporated into the final findings. The focus group suggested ways in which specific ethical challenges may be addressed.
Main findings: The findings revealed many ethical challenges that can be better understood within a broad framework encompassing 10 themes: universal ethical values; multicultural global contexts; the regulatory-legal framework for ACs in SA; characteristics of the assessor; psychometric properties of the AC; characteristics of the participant; bias and prejudice; governance of the AC process; the ethical culture of the employer organisation; and the elusive nature of ethics as a concept.
Practical and managerial implications: Considerable risk exists for the unethical use of ACs. An awareness of possible areas of risk may assist AC stakeholders in their search for ethical AC use.
Contribution or value-add: The study may contribute to an evidence-based understanding of the ethical aspects of ACs. The recommendations may also benefit all AC stakeholders who wish to use ACs ethically.
Introduction
In today’s ever-changing and highly competitive world of work, the success or failure of any business strongly depends on the calibre of its personnel (Aguinis, Joo & Gottfredson, 2011; Wheatley, 2009). The downfall of an organisation is often caused by employee incompetence (Elias, 2013), person–job mismatch (Spence Laschinger, Wong & Greco, 2006) or unethical conduct (Rossouw & Van Vuuren, 2014). It is for this reason that organisations are prepared to invest in specialised methods to select and develop employees (Foxcroft & Roodt, 2013). Such methods include psychological tests and other means of assessment, including assessment centres (ACs), which are believed to play an important role in the measurement and prediction of human behaviour and performance (Hermelin, Lievens & Robertson, 2007).
An AC is defined as a process of assessment aimed at identifying the strengths and weaknesses of candidates or employees to aid decision-making regarding their selection or development (Thornton, 1992). The process includes the use of a broad range of assessment techniques in a multiple methods, multiple traits and multiple assessors set-up with the purpose of assessing and/or measuring a range of attributes and competencies (Moerdyk, 2009; Schlebusch & Roodt, 2008). Assessments are pooled between assessors to arrive at an overall AC rating, which is used to aid decision-making (Hagan, Konopaske, Bernardin & Tyler, 2006; International Taskforce on Assessment Center Guidelines, 2015). Assessment techniques would essentially consist of comprehensive and standardised procedures, including job simulations and situational exercises (Jackson, Stillman & Englert, 2010; Schlebusch & Roodt, 2008; Thornton, Rupp & Hoffman, 2015).
According to Schlebusch and Roodt (2008), ACs form part of the general assessment family but differ from traditional assessment tests in the following ways: (1) AC methods and decisions are mostly based on overt or visible behaviours, whereas other assessments focus on covert or latent aspects; (2) ACs use multiple techniques and multiple assessors, which is not necessarily the case with other assessment methods; (3) ACs may be administered, scored and interpreted by qualified behavioural specialists (such as psychologists) as well as by other trained individuals – which is not permissible in respect of psychometric tests within the South African (SA) regulatory-legal framework; and (4) ACs are context-specific in that simulations and exercises are often similar, or closely linked, to the actual job in question. The Guidelines for Best Practice Use of the Assessment Centre Method in South Africa (5th edition), compiled by the Assessment Centre Study Group Taskforce on Assessment Centres in South Africa (2015), state that each AC method (irrespective of type) should have at least the following features to be classified as an AC: job analysis to define competencies and simulations; behaviour classification to operationalise these competencies; appropriate assessment techniques; multiple assessments; work simulations; multiple assessors; comprehensive assessor training; recorded behaviours; data integration; and comprehensive report writing.
The popularity of ACs is attributable to a number of demonstrated strengths. These include high reliability and face, content and predictive validity, especially for selection, training and promotion (Bergh & Theron, 2009; Brits, Meiring & Becker, 2013; Kriek, 1991; Moerdyk, 2009; Thornton & Gibbons, 2009); little adverse impact (Thornton et al., 2015); and high fidelity scores because of the job-related nature of ACs (Schollaert & Lievens, 2011). ACs are competency-based, job-related, organisationally focused and industry specific. They are conducted by trained facilitators, may include line managers to enrich decision-making and are mostly experienced positively by participants (Moerdyk, 2009). It is not surprising, therefore, that the use of ACs is consistently on the rise (Brits et al., 2013; Krause, Rossberger, Dowdeswell, Venter & Joubert, 2011; Müller & Roodt, 2013) – to the extent that SA is now believed to be the third largest user of ACs among the 82 countries where ACs are used (Mulder & Taylor, 2015).
Despite their utility and widespread use, ACs are tainted by inconsistent and sometimes contradictory construct validity evidence (Jackson, Michaelides, Dewberry & Kim, 2016; Meriac, Hoffman, Woehr & Fleisher, 2008; Pietersen, 1992). Furthermore, current findings indicate that approximately one third of AC content used in SA specifically was developed overseas (Krause et al., 2011). This practice raises questions about the relevance and cultural appropriateness of AC content in a specific context, especially with regard to validity issues. It is clear from the above discussion that ethical issues may well arise in the use of ACs generally, supporting a need for research in this field.
Regulatory-legal framework for the use of assessment centres in South Africa
Any form of psychological assessment in SA is controlled within an extensive regulatory-legal framework. The Health Professions Act (Health Professions Act, No. 56 of 1974) regulates the conduct of all health professionals through the Health Professions Council of South Africa (HPCSA). The Board for Psychology, in turn, prescribes general rules and ethical guidelines that are binding upon all psychologists (Health Professions Act, No. 56 of 1974, Notice R717 of 2006). These general rules and guidelines refer to overarching ethical principles, such as respect for human rights and the notion of ‘creating no harm’. Although the use of psychological assessment activities is addressed in considerable detail, the use of ACs is not specifically covered. However, overarching principles, such as reliability, validity and fairness, are implied throughout.
Apart from the above-mentioned regulatory-legal acts and regulations, many other acts have an implicit bearing on the use of ACs in SA. Among these are the following: (1) the Constitution of the Republic of South Africa (Constitution of the Republic of South Africa, Act No. 108 of 1996), which entrenches the values of equity and equal treatment of all citizens; (2) the Labour Relations Act (Labour Relations Act, No. 66 of 1995), which embodies the notion of discrimination – either in the form of intentional adverse treatment or in the form of unintentional adverse impact by an employer on members of a particular group; (3) the Protection of Personal Information Act (Protection of Personal Information Act, No. 4 of 2013) and (4) the Promotion of Access to Information Act (Promotion of Access to Information Act, No. 2 of 2000), which deal with the right to privacy, the right to information and the unlawful sharing of information; and (5) the Consumer Protection Act (Consumer Protection Act, No. 68 of 2008), which has implications for the contractual relationship between AC practitioners, participants and clients.
Of further interest is the Employment Equity Act (EEA) (Employment Equity Act, No. 55 of 1998), which aims to achieve equity in the SA workplace by promoting equal opportunities and fair treatment in employment. Section 8 of the Act contained a specific reference to psychological testing and other forms of assessment and determined that:
psychological testing and other forms of assessment of an employee was prohibited unless the assessment or test that is being used: (a) has been scientifically shown to be valid and reliable, (b) can be applied fairly to all employees and (c) is not biased against any employee or group. (Employment Equity Act, No. 55 of 1998, Section 8)
It was generally accepted that an AC could be classified as ‘other forms of assessment’ and would, therefore, fall within the ambit of the Act.
A recently amended version of the EEA (Employment Equity Act, No. 55 of 1998, as amended by Act No. 47 of 2013) created complications for interpretation by determining that:
psychological testing and other similar assessments are prohibited unless the test or assessment being used: (a) has been scientifically shown to be valid and reliable; (b) can be applied fairly to all employees; (c) is not biased against any employee or group; and (d) has been certified by the Health Professions Council of South Africa, as established by section 2 of the Health Professions Act (Health Professions Act, No. 56 of 1974, as amended by Act 29 of 2007), or any other body which may be authorised by law to certify those tests or assessments.
This amended version of the Act appeared to imply a narrower definition of psychological and other assessments because of the specific reference to the HPCSA or any other body that may be authorised by law to certify those tests or assessments. This amended version seemingly included psychometric tests (which were previously classified by the HPCSA) as well as other forms of assessments such as ACs, which were never explicitly regulated in any form by the HPCSA or any other body. This interpretation led to serious confusion among all AC stakeholders. The matter was further complicated by the perceived inability of the HPCSA to practically regulate the certification of psychological tests because neither a conceptual framework nor efficient operational structures exist to facilitate the process.
These matters recently served before the courts and it was determined that ‘the proclamation in terms of which Section 8(d) of the EE Act was brought into operation was null and void and of no force and effect’ (Test Publishers v. President of RSA and others, 2017), implying that Section 8 of the original act was to be retained, by determining that:
psychological testing and other forms of assessment of an employee was prohibited unless the assessment or test that is being used (a) has been scientifically shown to be valid and reliable, (b) can be applied fairly to all employees and (c) is not biased against any employee or group.
In essence, the implications were that ACs (as a form of assessment of an employee) in SA should comply with the universal testing principles of reliability, validity, fairness and absence of bias against any individual or group – as was always the case.
Apart from the above complexities, the regulatory-legal environment for the use of ACs in SA is further complicated by the specific requirements regarding validity in both the original and the amended versions of the EEA. The Act places an unrealistic demand on practitioners, namely compliance with validity as the Act conceptualises it: a unidimensional construct that produces a single dichotomous outcome, rendering a test or measure either valid or invalid. Contemporary validity theories and empirical evidence, on the contrary, conceptualise validity as a concept that is assessed on a continuum of evidentiary support, and validation as an ongoing process within different contexts over time (Cascio, 1978; Coetzee & Schreuder, 2016; Muchinsky, 2011; Muchinsky, Kriek & Schreuder, 2005). Validity is therefore not represented as a single score value, but as the result of scientifically based judgements across multiple dimensions of validity, including theoretical (construct), content, criterion-related, convergent, discriminant, concurrent and external/ecological validity (Coetzee & Schreuder, 2016; Moerdyk, 2009).
From the above, it is clear that the regulatory-legal environment in SA does not provide explicit guidance to ensure sound and ethical AC use. Because the regulatory-legal framework is not clear, practitioners need to be ethically aware and act beyond mere legal compliance, embodying the spirit captured in all the separate acts by upholding universal ethical values such as equality, equal treatment, prevention of adverse impact, dignity, respect and ‘creating no harm’.
Practice-informed guidelines to assist assessment centre practitioners
Beyond the imperative to comply with regulations and laws, from a professional and ethical point of view, a need exists – both nationally and internationally – for practice-informed guidelines to guide practitioners ‘to do the right thing’. Many stakeholder organisations, including national and international professional bodies, have over many years compiled and maintained explicit guidelines for the use of ACs. Internationally, many guidelines exist, for example, The Design and Delivery of Assessment Centres (British Psychological Society, 2015) and the Guidelines and Ethical Considerations for Assessment Center Operations (6th edition) (International Taskforce on Assessment Center Guidelines, 2015). These guidelines address aspects such as assessor training, validation issues and technology. They also include a section on ethical, legal and social responsibility issues, such as informed participation, data security and unfair discrimination.
Of particular importance in SA are the Guidelines for Best Practice Use of the Assessment Centre Method in South Africa (Assessment Centre Study Group [ACSG] Taskforce on Assessment Centres in South Africa, 2015). The ACSG, an organisation of interested parties, came into being in 1981 with the purpose of reflecting on, guiding and monitoring the use of the AC method in SA. The ACSG guidelines are closely aligned with the above-mentioned International Taskforce on Assessment Center Guidelines (2015) and similar in many respects (Meiring & Buckett, 2016). The ethics sections deal specifically with aspects including informed consent, participant rights, re-assessment, disabilities, copyright, AC integrity, the portrayal of an AC as delivering results it was not designed to deliver, using AC results for purposes other than those for which they were intended, using ACs across different contexts, repeated exposure, assessors who know participants, compromising professional conduct and social responsibility. Cross-cultural considerations are of particular importance and are addressed separately. These guidelines can only be of value if people are aware of them and internalise the underlying intentions (Botha, 2016).
Such guidelines can, however, never replace the mandate of a statutory body. The implication thereof is that compliance with these guidelines is voluntary rather than compulsory, again supporting the need for an ethical awareness among practitioners. The emphasis should not be on compliance with the minimum legal requirements, but on procedures and decisions that are fair and justifiable (Coetzee & Schreuder, 2016). This requires, above all, an ethical awareness and ethical competence that guide actions beyond regulations and prescriptions (Rossouw & Van Vuuren, 2014). The ethical awareness and competence should perhaps be both aspirational and directional (Meiring, Schlebusch & Lowman, 2016).
Problem statement
Because the regulatory-legal environment does not provide explicit guidelines for the use of ACs in SA, self-regulation is essential. However, if one accepts that no ethical guidelines of a non-statutory nature can ever fully direct behaviour in every instance, a conceptual understanding of the ethical challenges and the possible solutions to these may aid practitioners in the ethical use of ACs. The problem may be compounded in the SA context where the population is very diverse in terms of race, religion, culture and socio-economic status (National Planning Commission, 2013), which implies that ethics and ethical challenges may be experienced and perceived differently by individuals from different groupings.
Comprehensive practice-informed guidelines, supported by a deep understanding of ethical challenges in the use of ACs in SA, may be even more important because of a history that did not always promote equity and the equal treatment of citizens within a context of social discrimination and exclusion. This history may even have allowed the use of psychometric and similar assessments to the detriment of certain groups of the population, giving rise to questions regarding the procedural, interactional and distributive fairness of the resultant decisions (Coetzee & Schreuder, 2016; Donald, Thatcher & Milner, 2014; Foxcroft & Roodt, 2013).
It is against this background that this study was undertaken: If the ethical use of ACs in SA is not specifically regulated and if practice-informed guidelines are neither compulsory nor all-encompassing, would a better conceptual understanding of the ethical challenges pertaining to ACs not aid the creation of a framework within which the notion of ethics could be operationalised? Given also that ethics is a grey area (Pope & Vasquez, 1998) and there is often no clarity about what is right or wrong (Levin & Buckett, 2011), would new insights not bring about a greater awareness and unified understanding of the ethical challenges faced by AC practitioners in their day-to-day involvement with the AC method?
Research objectives
In light of the above, the search was for a deeper, evidence-based understanding of the ethical challenges that may exist in the use of ACs in SA. Believing that reality is created by the actors (social constructionism), it was decided to ask AC practitioners to share their insights based on their actual lived experiences. They were asked to delineate the notion of ‘ethics in the use of assessment centres’ and share their actual experiences of an ethical dilemma in this respect. In addition, they were required to evaluate and enrich recurring and outstanding themes derived from their input in the first phase of the study and recommend possible solutions to address the challenges in practice. This approach was chosen as certain scholars, for example, Cascio (1978), suggested that:
… ethical behaviour is not governed by hard-and-fast rules; rather it adapts and changes in response to social norms and in response to the needs and interests of those served by a profession. (p. 437)
From this point on, the following key terms will be used: (1) Assessor – the person who observes, records, classifies and evaluates the behaviours of the AC participants across competencies. In this paper, the term may – by implication – include AC administrators; (2) AC developer or designer – the person who conducts the job analysis and designs the AC accordingly; (3) Role-player – the person who plays the part of a specific character during an interactive simulation. In order to elicit competency-related behaviour from the participant, a role-player needs to provide stimuli to which the participant must react. The role-player is, therefore, part of the simulation design; (4) Participant – the individual taking part in the AC with the purpose of being assessed by the assessors; (5) Client – the person who requested the services of an AC provider; (6) AC provider – the person who offers ACs to clients; and (7) AC practitioner – a term that refers to all the roles above, with the exception of the participant and the client.
The objectives that were set for the study were, therefore: (1) to gain an overview of the ethical challenges that practitioners experience in their use of ACs; (2) to use practitioners’ input to develop a framework of understanding to guide practitioners in their endeavour to use ACs ethically, even in the absence of sufficient regulatory-legal or practice-informed guidelines; and (3) to ask practitioners to put forward suggestions to address the ethical challenges identified in the earlier stages of the study. It was believed that evidence-based insights could inform both the scientific and the practice communities in their search for ethical AC use.
Psychological and other assessments/measurements
Psychological assessment is defined as the:
process of measuring one or several variables of interest in order to make decisions about individuals or inferences about a population. It is the process of determining the presence of and/or the extent to which an object, person or group or system possesses a particular property, characteristic or attribute. (Moerdyk, 2009, p. 260)
Measurement is a:
logical process of assigning numbers to observations to represent the quantity of a trait or character possessed. It involves applying clearly stated rules that are public, transparent, unambiguous and agreed-upon by knowledgeable people to determine how much of some property or attribute is present in a particular object, system or process. (Moerdyk, 2009, p. 4)
The International Test Commission Guidelines on Test Use (International Test Commission, 2013) make it clear that the key principles of ‘testing’ apply equally to all other forms of assessment (e.g. job selection interviews, job performance appraisals, diagnostic assessment of learning support needs) and require all practitioners to use assessment measures appropriately, professionally and in an ethical manner. Two criteria that are crucial for assessments and measurements to be fair are reliability and validity. Reliability is defined as ‘the degree of consistency of a measure and/or the degree to which it is free from random error’ (Moerdyk, 2009, p. 271) and validity as ‘the degree to which an instrument measures what it is intended or claims to measure’ (Moerdyk, 2009, p. 274). It follows that any AC will need to meet these criteria as a prerequisite for ethical use.
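As a point of reference, reliability is commonly formalised in classical test theory – a standard formulation that is consistent with, though not drawn from, the sources cited above:

$$X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}$$

Here an observed score $X$ is decomposed into a true score $T$ and random error $E$; reliability $\rho_{XX'}$ is the proportion of observed-score variance attributable to true-score variance, so a measure entirely free from random error ($\sigma_E^2 = 0$) has a reliability of 1.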
The notion of ethics
The terms ‘ethics’ and ‘morality’ are often used interchangeably. Despite this, many philosophers and researchers propose a distinction between the two concepts (Pojman, 1994). According to this perspective, morality relates to the underlying principles that distinguish between right and wrong, while ethics is more concerned with the specific standards of conduct acceptable to a group, a profession or members of an organisation (Adelman, 1991). Various scholars, including Reese (1980) as well as Hoffman and Frederick (1995), prefer not to differentiate between ethics and morality. They argue that, from an etymological point of view, the Latin word for ‘moral’ corresponds with the Greek word for ‘ethical’ (Lacey, 1976, p. 138); hence, there is no need to differentiate between the two concepts and it is justifiable to use them interchangeably. Contemporary views conceptualise an interplay between the two concepts, and scholars – De George (1999) among others – assert that ethics is a systematic attempt to make sense of our individual and social moral experiences with the end goal of determining the rules that should govern human conduct, desirable values and character traits worthy of development. A similar view is held by Wiley (1995), who defines ethics as a system of conduct based on moral obligations that indicate how we should behave, our responsibilities, social justice and the commitment to do what is right. This integrated view is supported by Rossouw and Van Vuuren (2014), who visualise the notion of ethics as a balance between being ‘good for self’ and ‘good for the other’: our behaviour is ethical when our actions are consistently good for all stakeholders in a given situation. Because of its simple yet encompassing nature, the latter definition was adopted as the working definition for the current study. It was also perceived to fit well with a study focused on behaviour in the workplace, given that Coetzee and Schreuder (2010, p. 517) propose that ethics ‘takes as its focus of interest right, wrong, good and bad in relation to behaviour in an organisational context’.
Ethical considerations
Ethical considerations were given due attention. The voluntary nature of participation in the research was made clear and participants were informed that they could withdraw at any time without any negative consequences to themselves. Confidentiality, anonymity and freedom from possible harm were unconditionally guaranteed; to this end, the original data are kept safe under the control of the researchers. Participants were treated with respect and discussions were positioned at a professional level throughout. Research protocol was followed to the best of our ability and well-respected research procedures were employed throughout.
Method
This study was broadly conceptualised from a social constructionist perspective, which implies that reality is constructed through human experience (Willig, 2001). This paradigm is aligned with the notion that there is no single view of reality (Myers, 2013) and that the real world is ‘shaped and constructed by our life experiences, value systems and cognitive schemata and categories that we bring to bear on issues’ (Moerdyk, 2009, p. 12). Those functioning within this paradigm mostly prefer to engage directly and closely with participants in an attempt to view the world from their subjective perspectives (Crotty, 2003; Esterberg, 2002; Myers, 2009). Meaning is created by the interpretations of those who personally experience the specific phenomenon (Shah & Corley, 2006).
Because the study was concerned with the concept of ethics which, by itself, cannot be fully understood or objectified (Levin & Buckett, 2011), it was essential to employ a research method that would allow for a thorough investigation and exploration of this phenomenon. A qualitative research method was preferred as it supported the goal of exploring and understanding a phenomenon within a particular context (Esterberg, 2002; Myers, 2013; Neuman, 2000).
Data collection
Several data collection strategies can be followed within a qualitative research approach. They include, but are not limited to, phenomenology, case studies, grounded theory, action research and ethnography (Denzin & Lincoln, 2005, 2008; Myers, 2013). This study was positioned within the realm of phenomenology, ‘which is interested in the world as it is experienced by human beings within particular contexts and at particular times’ (Willig, 2001, p. 51).
The study included two distinct phases of data collection: a qualitative self-report survey that allowed for access to data from a relatively wide sample of AC practitioners and a focus group discussion by a small group of subject-matter specialists to (1) supplement data, (2) assist in making sense of the data and (3) generate possible solutions to the challenges.
Qualitative survey
According to Groves et al. (2004):
the survey is a systematic method for gathering information from (a sample of) entities for the purpose of constructing quantitative descriptors of the attributes of the larger population of which the entities are members (p. 4).
Traditionally, the word survey covered only quantitative studies aimed primarily at describing numerical distributions of variables. More recently, qualitative surveys have come into wide use; these aim not merely to establish frequencies or means, but rather to determine the meaning and diversity of some topic of interest within a given population (Jansen, 2010). Unlike quantitative surveys, which are concerned mainly with generalising from the sample to the population, qualitative surveys are primarily concerned with understanding phenomena within their own contexts (Jansen, 2010; Neuman, 2000). A qualitative survey can thus be used as a data collection tool to collect self-report opinions and perceptions on a particular topic, as is the case in this study (Fink, 2003; Myers, 2009).
The qualitative survey was divided into three sections: (1) introduction, (2) biographical information and (3) survey content. As recommended by Denzin and Lincoln (2005), the introductory note stressed the importance of confidentiality and anonymity and further stated that participation in the study was voluntary. The biographical detail section was concerned with the participants’ personal information, with specific reference to experience in ACs, employment sectors and their levels of education. The third section comprised the actual research questions. The survey made it possible to elicit responses from a large number of respondents within a short period while at the same time affording opportunities to ask open-ended questions and probe for qualitative insights from the research participants. Two questions were posed: (1) List between five and eight single-word concepts or short phrases that you associate with the notion of ethical challenges in ACs, e.g. ‘fairness’, and (2) Briefly describe the single most profound incident when you experienced an assessment centre–related ethical challenge during your involvement with ACs over the years.
Despite its merit, the qualitative survey on its own was considered insufficient as it did not provide an opportunity to ask further questions, clarify meaning and confirm interpretations (Fink, 2003; Jansen, 2010). This was perceived to be a serious limitation, particularly within the interpretive paradigm where the need to probe, ask questions, clarify messages and confirm meaning is deemed essential (Esterberg, 2002). For this reason, a focus group was used as a supplementary strategy to obtain more data and to facilitate data enrichment. This decision was also aligned with the qualitative methodology, where the interpretations from both the participants and the researchers contribute to the process of establishing meaning (Willig, 2001). The involvement of a focus group was also perceived as a form of triangulation, which made it possible to view the topic from different angles (Willig, 2001). Triangulation in this context is ‘the process by which information is purposefully gathered across multiple measures, multiple domains, multiple sources, multiple settings, and on multiple occasions’ (Moerdyk, 2009, p. 273), the purpose being to seek converging information from different sources (Coetzee & Schreuder, 2010).
Focus group
A focus group is described as a purposefully planned and organised session during which participants come together, in the presence of a facilitator or moderator, to discuss issues or problems, at the same time providing possible solutions or recommendations on how to avoid and/or overcome these challenges (Duggleby, 2005). Focus groups have the added advantage of being fast, economical and efficient as a means to gain insights from multiple participants (Krueger & Casey, 2000; Willig, 2001). Another advantage of focus groups is that they are socially orientated in nature. This social element provides a sense of belonging to a group, which is likely to increase the participants’ sense of safety and cohesiveness and therefore makes it easier for them to express their views in this perceived safe environment (Levin & Buckett, 2011; Peters, 1993). This also allows points of view to be challenged, extended, developed, undermined or qualified to enrich the researcher’s data (Willig, 2001).
The focus group was responsible for scrutinising the findings based on the survey to: (1) comment on the face validity of the findings; (2) clarify and add insight to these findings; (3) suggest further themes that may have been missed during the survey phase; and (4) recommend possible solutions to minimise ethical challenges in the use of ACs in SA. The following questions were posed for open discussion: Based on your experiences and expertise: (1) Are the identified themes from the survey clear and relevant? (2) Are there any themes that you can clarify and refine? (3) Are there any further ethical challenges not reflected by the current themes that require our attention? (4) Can you provide possible solutions and/or recommendations for each theme?
Sampling
The overall research setting was the AC field of practice in SA and the primary focus was on the academic and business contexts within which ACs are regularly studied and used. A purposive sampling strategy was followed. Purposive sampling requires critical and deliberate thought and action aimed at ensuring the selection of participants who are best suited for answering a specific research question in a given study (Neuman, 2000). This technique was used because it allowed the researcher to identify and include participants who could clarify and deepen our understanding of the topic being researched (Denzin & Lincoln, 2008). Because exposure to ACs was a critical criterion for inclusion in the study, participants who were regarded as experts, regular users or informed stakeholders in the AC field were selected.
Participants: Qualitative survey
Participants in the first part of the data-gathering process included delegates who attended the ACSG Conference (2012) held in Stellenbosch (SA) and who voluntarily participated in the research. Access to the conference for research purposes was gained through the study supervisors and colleagues from the University of Johannesburg who were involved in organising the event. The rationale for including delegates from an AC conference was the assumption that people who attended an AC conference would be experienced, or interested, in the field of ACs and would, therefore, be well positioned to provide valuable insights and information. Appendix 1 presents a summary of the survey participants’ biographical details (n = 96). The following information is included: academic qualification and area of training, area of specialisation in ACs, number of years involved with ACs in a specific role or function, and relevant employment sector. Most participants held post-graduate degrees (more than 63% of the participants had master’s or PhD degrees) and almost 75% had degrees in psychology, including industrial and organisational as well as research psychology. Participants’ experience in the field of ACs ranged from being assessors to being involved in the design, execution and overall management of ACs. Many were involved in more than one area. Participants mostly worked in the areas of assessment, recruitment and development (more than 50%) and a reasonable number of academics (almost 10%) added their voice to the data. Most participants had more than 5 years’ exposure to ACs and some had been working in this field for more than 21 years. Participants worked for organisations (85%), in private practice (16%) and in academia (10%), with some working in more than one of the sectors. For purposes of future reference in the text, participants were numbered P1 to P96.
Participants: Focus group
The focus group consisted of 16 AC practitioners from different sectors of the economy. Access to the participants was gained through referrals and assistance from the study supervisors. Contact was made telephonically, via email or through personal contact. Participants were included on the basis of their exposure to the use of ACs, in different capacities, and the expectation that they could add insight and depth. Participation was voluntary.
Appendix 2 presents a summary of the focus group participants’ biographical characteristics (n = 16). The following information is included: academic qualification and area of training, area of specialisation in ACs, number of years involved with ACs, dominant area of work and relevant employment sector. The majority of the participants (14 out of 16) held a master’s or PhD degree in industrial-organisational (IO), clinical or research psychology. Most of the participants (11 out of 16) had more than 5 years’ experience in the design or execution of ACs, the overall management of ACs, or involvement as assessors or observers in the AC process. The participants’ dominant areas of work were human resource management (HR), recruitment or talent management (n = 2), assessment (n = 11), teaching and research (n = 4), or more than one of the aforementioned. Focus group participants are referred to as FGP further on in the text.
Differences of opinion exist with regard to the ideal size for a focus group. Morgan (2013) advised that the aim of the study and the amount that each participant has to contribute to the group are two major factors to consider when deciding on the group size. If the participants have a low level of involvement with the topic, it may be difficult to maintain an active discussion in a smaller group. Small groups (fewer than six participants) are also at risk of being less productive, as members are sensitive to group dynamics and the opinions of other individual participants; large groups (more than 12 participants) may break up into simultaneous small conversations between neighbours at the table, which makes it difficult to follow and record what is being said. A moderate group size is therefore recommended (Freeman, 2006; Morgan, 2013; Morgan & Bottorf, 2010). For logistical reasons, a decision was made to select the participants from two major cities and to conduct a separate focus group session in each. The feedback from both sessions was consolidated to serve as one set of data.
During the focus group sessions, the researchers participated mainly as moderators or facilitators. Firstly, an overview of the study was provided to clarify concepts and definitions, and secondly, a clear process was delineated. The themes from the first phase were presented to both focus groups who were specifically requested to reflect on these, aid in clarifying the meaning of each and suggest themes that might have been missed in the first phase of the study. Focus group participants were then asked to suggest practice-informed recommendations to address the identified ethical challenges. We encouraged participants to engage and share their views. We facilitated the process by posing questions, asking for clarification and challenging specific points of view in order to stimulate a lively and critical climate. The focus group input was captured in personal notes and references. These were subsequently transcribed and analysed within the existing framework that emerged during the first phase of the data analysis process.
Data analysis
Data were analysed in two phases – first the survey data and thereafter the focus group data. In line with the principles of qualitative research, every effort was made to analyse the data in a manner that preserved the intended meaning (Flick, 2014). Two processes were involved in the analysis of the survey data: firstly, content analysis, because it is ‘the systematic, objective, quantitative analysis of message characteristics’ (Neuendorf, 2002, p. 1), and secondly, thematic analysis, which refers to ‘identifying, analysing and reporting patterns (themes) within data, implying the interpretation of various aspects of the research topic’ (Braun & Clarke, 2006, p. 79).
Units of analysis were either words or key-phrases-in-text. These were systematically and objectively coded and condensed to facilitate the identification of themes that reflect the essence of the data and represent the supporting codes. In analysing the qualitative survey results, each participant’s responses were electronically captured, word for word. Each time a word or key phrase was repeated, a tick was made next to it in order to ascertain how many times that particular word or phrase had been mentioned. Similar words or phrases were then grouped together and, wherever necessary, consolidated into a single word or phrase, with ticks again recording the frequency of each consolidated unit. These words or phrases were then further analysed to consider their inherent meanings, and units were created on the basis of similarity and relatedness. This process required intense immersion in the data, which created an opportunity to engage with the content at a deeper level and to make sense of the data from a participant’s perspective. It allowed for the identification of first-level generic themes which, through a deeper second-level analysis, were clustered and consolidated into broader and more encompassing themes that could be labelled, described and demonstrated by means of actual quotes from the data. The comprehensive and meaningful themes that emerged from this process were presented to the focus groups for deliberation.
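To make the counting and consolidation step concrete, the following minimal Python sketch mimics the manual tallying described above. All data are hypothetical: the participant responses, the consolidation map and the function name are illustrative only, and the actual analysis in the study was performed by hand.

```python
from collections import Counter

# Hypothetical survey responses: each participant listed short phrases
# (survey question 1). Participant IDs and phrases are illustrative only.
responses = {
    "P1": ["fairness", "confidentiality", "assessor bias"],
    "P2": ["fair treatment", "informed consent", "confidentiality"],
    "P3": ["bias", "fairness", "validity"],
}

# Illustrative consolidation map: related phrases collapse into one unit,
# mirroring the manual step of grouping similar words or phrases.
CONSOLIDATE = {
    "fair treatment": "fairness",
    "assessor bias": "bias",
}

def count_units(data: dict[str, list[str]], consolidate: dict[str, str]) -> Counter:
    """Tally the frequency of each (consolidated) unit of analysis."""
    counts: Counter = Counter()
    for phrases in data.values():
        for phrase in phrases:
            unit = phrase.lower().strip()
            counts[consolidate.get(unit, unit)] += 1
    return counts

# Frequencies correspond to the manual tick counts; deriving first- and
# second-level themes from these units remains a human, interpretive step.
for unit, freq in count_units(responses, CONSOLIDATE).most_common():
    print(f"{unit}: {freq}")
```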
Once the focus group sessions had been completed, the data from both sessions were consolidated into one, and subsequently integrated with the themes that arose from the first phase of data analysis – the survey data. In analysing the data captured during the focus group sessions, subjective judgements were made by the authors, who aimed to retain the research participants’ intended meaning while at the same time allowing a level of interpretation and integration. This process generated two outcomes – richly described final themes and recommendations for addressing ethical challenges attached to each of the specific themes.
Quality assurance
Four criteria that may be applied to ensure quality in qualitative research were acknowledged: credibility, transferability, dependability and confirmability (Golafshani, 2003; Shenton, 2004). The following actions were taken to meet these criteria: (1) a theoretical context for the study was created; (2) parameters for the setting, the prevailing social world and the population were provided; (3) an attempt was made to ensure a fair degree of representativeness in the sampling process; (4) each step of the research process was recorded in detail and scientific rigour was applied in all the steps; (5) data were captured, transcribed and safely stored for scrutiny at any time; (6) data were analysed and interpreted according to established processes and all efforts were made to limit researcher subjectivity; (7) findings were presented clearly in table format; and (8) the findings were presented to a focus group with a dual purpose – to contribute data in its own right and to act as a peer review mechanism to enhance confirmability. An attempt was made to triangulate the findings by discussing these within a context of multiple theoretical perspectives from various researchers in the field (see Mays & Pope, 2000; Shenton, 2004).
Findings
Table 1 presents the 10 themes and their supporting codes, which emerged after a process of thematic content analysis of all the responses collected from the 96 survey participants and the 16 focus group participants. In-text words and key phrases (372 units of analysis) were systematically analysed and ordered into nine initial themes according to similarity and a clear congruence of meaning. Each focus group subsequently (1) considered the outcome of the content analysis and accepted the nine themes at face value; (2) clarified and added insight to these; (3) suggested a supplementary theme (Theme 10: Governance of ACs); and (4) recommended possible solutions to minimise ethical challenges faced in ACs in SA. A profound practice-based incident demonstrating the essence of each theme is included in the table, and the frequency with which each theme was raised is indicated. The themes are discussed in order from the highest to the lowest frequency. In each case, the essence of the theme is clarified; an ethical challenge arises whenever an AC is misaligned with the inherent ethical values and principles that underlie the theme (fairness, for example).
TABLE 1: Ethical challenges faced by assessment centres (ACs) in South Africa.
Discussion
Figure 1 presents a framework to aid an understanding of the ethical challenges in ACs. The framework incorporates various themes. Three themes external to an organisation are relevant, namely universal ethical values, a multicultural global context (specifically the socio-political-historical context in SA) and the regulatory framework for ACs in SA. Three themes have relevance within the immediate AC context, namely assessor competence, personal characteristics, moral character or ethical intent; the psychometric properties of the AC itself; and participant characteristics. Three themes pertain to the client organisation, namely bias and prejudice, governance of the AC process and the ethical culture of the client organisation. The elusive nature of ethics as a construct is discussed separately.
FIGURE 1: Framework for understanding ethical challenges in assessment centres (ACs).
Figure 1, viewed from the inside out, depicts a typical AC process entailing the interaction between an assessor and a participant within an AC. The psychometric properties of the AC and the conduct of all the AC stakeholders (assessors, role-players and participants) during the interactive parts of the AC may have a negative impact on the ethical character of the AC. Aspects of bias and prejudice – within both the immediate AC and the broader organisational context – may threaten the overall fairness and justice envisaged for the process. The AC context is furthermore influenced by the ethical culture of the client organisation and the internal AC governance process, which often challenges the ethical implementation of an AC in various ways. The external socio-political-historical context and the regulatory environment for ACs in SA may also impact both the organisational and AC contexts – including all stakeholders – with regard to governance and ultimate fairness. The global level exerts an overarching influence on all other levels and incorporates aspects related to globalisation, multiculturalism and universal ethical principles. Each of these themes is discussed within the context that was created by the research participants, making use of the actual research input. Existing insights based on a theoretical understanding of the topic are incorporated where applicable. In this respect, researchers such as Dewberry and Jackson (2016) point to the need to incorporate multiple perspectives, such as those of AC designers, assessors and participants. Caldwell, Gruys and Thornton (2003) argue that multiple stakeholders are involved with ACs and that AC practitioners have implied duties to each stakeholder that might from time to time lead to ethical dilemmas. The notion of assigning the responsibility for the ethical use of ACs to multiple stakeholders, such as the client organisation, the assessors and the participants, will also be highlighted.
Universal ethical values in a multicultural global context
Adherence to universal ethical values was deemed essential for ensuring the ethical success of an AC. Universal ethical values refer to various constructs, which include morality, justice, equality, honesty, trust, respect, ‘doing no harm’, ‘doing good’, universal human rights and the notion of fairness. To achieve these objectives, the International Test Commission Guidelines on Test Use (International Test Commission, 2013) recommend adherence to, among others, the following principles: measures and results should be applied and used in a fair, professional and ethical manner; the needs and rights of people should be regarded with the utmost concern; the predictor should closely match the purpose for which the assessment results will be used; moderating factors that result from the social, political and cultural context of assessment should be taken into account, especially considering the impact these factors may have on the assessment results. The test guidelines make it clear that these universal testing principles should be honoured, irrespective of whether a measurement or assessment can strictly be classified as ‘test’ or not.
Adherence to universal ethical values has implications within a context of globalisation and multiculturalism, and this theme incorporates the following supporting codes: multicultural and global differences, the need for cultural sensitivity, differences in terms of morality and the need for sensitivity regarding religion. Awareness is required when dealing with possible cultural differences (including language) and culturally sensitive areas (such as religion). Although religious values and ethics are different concepts, they are sometimes used interchangeably; the focus group therefore provided a working distinction between the two as it may apply to the AC context. Religion refers to a person’s religious beliefs and value systems, whereas ethics refers to a set of ethical guidelines that apply to a specific context within a framework of universal ethical or moral values. To avoid the possible alienation of individuals from the AC process, AC practitioners were advised to avoid religious content in ACs and to refer to ethical guidelines rather than religious values, belief systems and customs in their deliberations.
Fairness within a socio-political-historical context in South Africa
Foxcroft and Roodt (2013) maintained that it would probably be futile to try and understand ethical challenges in ACs in SA without considering the specific socio-political-historical and cultural factors that influence almost every facet of SA society. These authors explained that testing and assessment are essentially Western-world activities that were brought to Africa during the colonial era and are therefore not naturally part of African culture. Efforts to change or adapt assessments to suit local social conditions were made only much later (Foxcroft, 1997; Foxcroft & Roodt, 2013). The development of assessments and testing almost inevitably reflected the racially segregated society from which it had evolved (Foxcroft & Roodt, 2013). This may imply that assessments in the past were probably not always applied fairly and appropriately for all members of SA society. It is against this background that it was agreed that all AC stakeholders have a responsibility to ensure that present-day ACs are fair and free from bias. This approach is aligned with international and national best practice guidelines that call for contextual adaptation in terms of social, political, institutional, linguistic and cultural differences (Assessment Centre Study Group Taskforce on Assessment Centres in South Africa, 2015; International Taskforce on Assessment Center Guidelines, 2015; International Test Commission, 2013).
In support of this search for fairness, the Society for Industrial and Organisational Psychology of South Africa has identified four criteria for fairness in the context of selection: equal outcomes, equitable treatment, equal opportunities to experience and learn from situations, and absence of predictive bias (Society for Industrial and Organisational Psychology of South Africa, 2005). It is evident that all ACs should at the very least meet these criteria. It is also important to note that fairness is a social rather than a scientific judgement and cannot be measured against scientific parameters, unlike reliability and validity, which, in the context of assessment/measurement, can be measured to a high degree. What is important in this sense is that, in the SA context – specifically in the light of our history – perceived fairness may constitute a major component or dimension of fairness in general (Donald et al., 2014).
ACs are acknowledged as a means to increase fairness because they create opportunities for people to be observed in simulated (work-related) environments (Moerdyk, 2009). Relatively small subgroup differences and minimal adverse impact in selection have been found when compared with the results of traditional selection measures (Kriek, 1991; Thornton et al., 2015), and Kriek, Hurst and Charoux (1994) reported that ACs also appear to be relatively culture-fair. In any event, it is important to remember that assessment is intended to fairly discriminate between people, not against people (Moerdyk, 2009), and that fairness is especially important where equal validity of measurement for a variety of different groups of people should exist – as is the case in SA.
Regulatory-legal framework for assessment centres in South Africa
Participants referred to many examples of legislation and controversies around these that need to be clarified among all stakeholders. A first concern dealt with whether ‘ACs should (in future) be classified as a psychological act as defined by the HPCSA or not’. While it was generally agreed that it would probably be ideal to place ACs within the domain of psychological acts, practical realities would make this impossible. The most important limitation would be that SA does not have sufficient numbers of psychologists to accept responsibility for all aspects of all ACs from start to end. For this reason, a compromise may need to be negotiated. While non-psychologists could possibly be trained to take responsibility for some aspects of an AC, depending on their complexity, it was recommended by some research participants that only qualified psychologists should be allowed to take overall responsibility for the facilitation of an AC, including aspects related to the interpretation of behaviour. This recommendation was not supported by all the participants and a proper debate that involves a wider group of AC stakeholders may be warranted in the future. Meanwhile, it was agreed that comprehensive practical guidelines need to be maintained and that ethical awareness should be fostered to serve as a moral compass whenever practitioners do not have a regulatory-legal or practice-informed framework to guide them.
A further concern dealt with the question of whether an AC (in future) should ever be classified as a psychometric test. In this respect, the Guidelines for Best Practice Use of the Assessment Centre Method in South Africa (5th edition) (Assessment Centre Study Group Taskforce on Assessment Centres in South Africa, 2015) state that ACs are not single tests, but rather a sequence of stimuli eliciting participant behaviour that can be linked to competencies, skills and work-related constructs. Such ACs are not psychological tests. When an AC is used as part of a selection process, it has to comply with the EEA’s requirements of validity, reliability, fairness and lack of bias. It furthermore should measure aspects inherently required for job performance based on information obtained from a thorough job analysis. To add to the scientific rigour of the process, an AC design model is recommended by Schlebusch and Roodt (2008) that incorporates four distinct design phases: analysis, design, implementation and evaluation, and validation. By paying adequate attention to each phase, the psychometric properties and overall fairness of the process may be enhanced. If a psychological construct is measured during an AC by means of an appropriate psychological test, the measurement should adhere to any legal requirements that may pertain to psychological tests in that context.
A final concern related to contemporary notions of the construct ‘validity’ in the context of assessment. While contemporary scientific evidence implies a continuum of evidentiary support to confirm validity (as a multidimensional construct), the EEA – in both its original and amended versions – places an unrealistic demand on practitioners, namely that of absolute validity: a dichotomy of a test or measure being either valid or invalid. Research participants highlighted their vulnerability in this respect: would it actually be possible to provide what is legally required of them? Participants agreed that this controversy needs to be addressed in the appropriate forums in future, to mitigate the professional risk that practitioners face in being unable to comply with the legal requirement in its current form.
The focus group concluded that overall ethical awareness would be crucial to guide behaviour when neither rules and regulations nor practice-informed guidelines exist. It was suggested that international cooperation be sought to help develop globally applicable guidelines and that an international registration body for ACs be established to monitor ACs and ensure adherence to minimum standards.
Assessor competence, personal characteristics, moral character and ethical intent
The very important role of the assessor in ensuring the success of an AC was strongly emphasised by all the research participants. The following were deemed to be of crucial importance: (1) the moral character or integrity of assessors; (2) their professional competence; (3) their personal characteristics, such as humility, agreeableness and tolerance; and (4) their ethical intent, including ethical competence and leadership. Assessor competence is important because, unlike other forms of assessment (for instance, psychological testing, where the primary tool of assessment is a test or instrument), the primary tool of assessment in ACs is the assessor (Foxcroft & Roodt, 2013; Howard, 2008). AC practitioners’ knowledge and their personal and scientific frames of reference will influence the selection and interpretation of assessment measures and outcomes (Bergh & Theron, 2009). An assessor may also unconsciously influence assessment results through, for example, body language, facial expressions, failure to establish rapport, failure to put participants at ease, and style of presentation (Coetzee & Schreuder, 2010; Moerdyk, 2009). Furthermore, an assessor’s ethical intent and awareness of possible bias and prejudice in measuring and scoring participants are essential in ensuring valid, reliable and fair AC outcomes. Within fast-paced, ambiguous and changing organisational contexts, the impact of a person’s values may be especially powerful (Illies & Reiter-Palmon, 2007).
A disconcerting finding of this study was that AC practitioners sometimes allow managers to influence AC processes and outcomes because, as one focus group participant put it:
‘They don’t want to bite the hand that feeds them’. [FGP]
Although it is important to have healthy relationships with clients, AC practitioners were advised not to:
‘chase contracts at the expense of effectiveness’. [FGP]
In other words, customisation should not compromise quality.
Psychometric properties of an assessment centre
Sound psychometric properties of ACs refer to criteria such as reliability and consistency; face, construct and content validity; fit-for-purpose, clear and measurable focal constructs and measurement criteria; good-quality exercises and tools; proper scoring mechanisms; appropriate norms; and adequate data management processes. The study confirmed the view that ethical challenges in ACs in SA are often linked to the quality of tools and the relevance of the instruments being used (Müller & Roodt, 2013). Krause et al. (2011) indicate, for example, that approximately one third of AC content used in SA is developed overseas and then imported for local use. This presents a major ethical challenge, especially because cultural and linguistic factors may influence AC results. It confirms the need for continuous research to validate a specific AC for use within the context for which it is intended, answering the question: ‘valid for what?’ (Roodt, de Kock & Schlebusch, 2013).
Measurement errors (differences between the results obtained and the true results) may also occur (Bergh & Theron, 2009). These errors may arise from the research process, the measurement instrument itself, instrument administration, scoring, participants, the nature of the concepts being measured, weaknesses in the measurement techniques, rating or observation errors, and subjective errors made by the researchers (Bergh & Theron, 2009; Moerdyk, 2009). Van Vuuren and Schlebusch (2013) highlight the fact that many ethical challenges are caused by inadequate training, which results in AC data not being captured accurately and on time; statistical validity and reliability being assumed rather than demonstrated, or being misinterpreted; incorrect interpretation of statistical calculations; and incorrect generalisation of results (Roodt et al., 2013).
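In classical test theory terms (a standard formulation added here for illustration, not drawn from the article itself), an obtained score is modelled as a true score plus an error component, and reliability is the proportion of observed-score variance attributable to true scores:

$$X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}$$

On this view, each of the error sources listed above inflates the error variance $\sigma_E^2$ and thereby depresses the reliability of AC ratings.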
In this regard, the focus group was particularly concerned about the quality of AC training:
What many practitioners have not taken notice of, though, is the fact that the more prevalent types of assessor training are not very effective, i.e. avoiding rating errors and behaviour observation training. Good, solid frame-of-reference training is the most effective form of assessor training. Normally, a careful follow-up of rater performance should be conducted as part of validation, i.e. assessment of inter-rater reliability, rater idiosyncrasy, rater error (halo, elevation error, etc.). [FGP]
It should be noted that research indicates a preference for frame-of-reference training for assessors working at ACs used for selection purposes, whereas a data-driven approach is better suited to ACs with a developmental purpose (Lievens, 2001). A combination of the two assessor training approaches might be even more desirable (Thornton et al., 2015).
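To make the recommended follow-up of rater performance concrete, the sketch below computes Cohen’s kappa, a common chance-corrected agreement statistic, for two assessors rating the same participants. The ratings, names and the choice of kappa are illustrative assumptions rather than part of the study; for continuous ratings from larger assessor panels, an intraclass correlation may be preferable.

```python
# Illustrative sketch (not from the study): Cohen's kappa for two assessors
# rating the same AC participants on a 3-point competency scale.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Agreement expected by chance if the two raters were independent.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical ratings for eight participants on one competency (1 = weak, 3 = strong).
assessor_1 = [3, 2, 2, 1, 3, 2, 1, 3]
assessor_2 = [3, 2, 1, 1, 3, 2, 2, 3]
print(f"Cohen's kappa: {cohens_kappa(assessor_1, assessor_2):.2f}")  # ≈ 0.62
```

Routinely computing such statistics after each AC would give practitioners early warning of rater idiosyncrasy before it contaminates decisions.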
Participants
It was widely accepted that participants bring their own set of values into the AC context, which may influence the ethical properties of the AC either positively or negatively. The underlying assumption was that ethical AC outcomes could be achieved only if assessors and participants collectively embraced universal ethical values. It was further suggested that participant characteristics could influence the success and ethical character of an AC: the validity and reliability of AC outcomes are believed to be influenced by a wide range of characteristics, including mood fluctuations, ability to concentrate, physical health, family problems, emotional distress, levels of fatigue, unfamiliarity with the context, lack of test-wiseness or test sophistication, competitiveness and motivation (Coetzee & Schreuder, 2010; Moerdyk, 2009).
Specific types of participant behaviour that could potentially jeopardise the ethical character of an AC were also highlighted. These included responses involving image management, second-guessing, social desirability, deliberate distortion or faking, and resistance. While the focus group confirmed that manipulative and faking behaviour is a common challenge in ACs in SA, the literature suggests that this phenomenon is not exclusive to SA but is in fact a global challenge (Schollaert & Lievens, 2011; Thornton & Gibbons, 2009; Thornton et al., 2015).
Bias and prejudice
The concepts of prejudice, bias and fairness are closely related (Donald et al., 2014; Foxcroft & Roodt, 2013; Muchinsky et al., 2005). While bias within the context of psychological measurement is defined as a ‘systematic error in measurement or research that affects one group (e.g. race, age and gender) more than another’ (Moerdyk, 2009, p. 261), prejudice refers to the same phenomenon in the broader context of human assessment (Muchinsky et al., 2005). Bias and unfairness may be based on, among other things, considerations of sex, race, religion or national origin (Health Professions Act, No. 56 of 1974), and both bias and prejudice may lead to favouritism, unfairness and injustice (Donald et al., 2014). Intentional and unintentional bias and prejudice towards an individual or group, especially in a manner considered to be unfair, are major contributors to the occurrence of disparate treatment, adverse impact and discrimination in the use of assessment generally (Donald et al., 2014; Foxcroft & Roodt, 2013; Moerdyk, 2009; Muchinsky et al., 2005).
Bias, prejudice and unfairness in the use of ACs could result from many variables such as the attitudes, values and judgements of the assessor (Kuncel & Highhouse, 2011; Muchinsky et al., 2005), properties of the measures (Buckett, Becker & Roodt, 2017; Foxcroft & Roodt, 2013; Moerdyk, 2009), contextual realities embodied in the socio-political world at the time (Donald et al., 2014; Foxcroft & Roodt, 2013), cultural preferences for certain personality characteristics such as bias towards extroverts (Collins et al., 2003; Crawley, Pinder & Herriot, 1990; De Beer, 2012; Furnham, Jensen & Crump, 2008; Jackson et al., 2010) and a tendency towards cloning (Bagues & Perez-Villadoniga, 2013; Fiske, 1999). In ACs specifically, preconceived ideas, different points of view, different interpretations and differences in scoring and scoring methods, especially where judgement is concerned, may lead to disparate outcomes (Kuncel, Klieger, Connelly & Ones, 2013). This may increase error and decrease reliability (Roch, 2006; Simonenko, Thornton, Gibbons & Kravtcova, 2013), highlighting a need for rigour and proper standardisation (Kuncel, Klieger & Ones, 2014).
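One practical screen for disparate outcomes, sketched below under assumed data, is the ‘four-fifths rule’ from the US Uniform Guidelines on Employee Selection Procedures; this convention is not cited in the article and is used here only to illustrate how subgroup differences in AC-based decisions can be monitored.

```python
# Illustrative sketch (not from the study): four-fifths rule screen for
# potential adverse impact in AC-based selection decisions.
def selection_rate(selected, assessed):
    """Proportion of assessed candidates who were selected."""
    return selected / assessed

def impact_ratio(focal, reference):
    """Focal group's selection rate relative to the reference group's."""
    return selection_rate(*focal) / selection_rate(*reference)

# Hypothetical counts per group: (number selected, number assessed).
group_a = (12, 40)  # selection rate 0.30
group_b = (24, 50)  # selection rate 0.48
print(f"Impact ratio: {impact_ratio(group_a, group_b):.2f}")  # 0.62, below the 0.80 threshold
```

An impact ratio below 0.80 does not prove bias, but it flags that the subgroup difference warrants closer psychometric and procedural scrutiny.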
It is against the SA historical background specifically that the focus group also argued for heterogeneous assessor teams in terms of race, age, gender and ethnicity. This is to ensure that AC participants (who themselves are likely to be heterogeneous) ‘perceive’ the AC process as fair and representative. This would also strengthen an inherent feature of the AC process, which is to use multiple assessors in order to view the same behaviour(s) from multiple perspectives and thus gain a better understanding of participant behaviour (Assessment Centre Study Group Taskforce on Assessment Centres in South Africa, 2015; Falk & Fox, 2014; International Taskforce on Assessment Center Guidelines, 2015).
Governance of the assessment centre process
An overall robust and ethical process was envisaged by all the survey participants. Ethical considerations are perceived to arise in all phases of the AC process – from the contracting or analysis phase, through the implementation phase, to the termination phase. This thinking is aligned with that of Van Vuuren and Schlebusch (2013), who suggest that the contracting or analysis phase should be used to clarify the roles and responsibilities of all the stakeholders, set the required parameters and agree on expectations and the decision-making process (including the relative weight of the AC outcome in the final decision). The assumption was made that, by paying sufficient attention to the contracting phase, many ethical challenges, such as manager interference, could be minimised. It was furthermore suggested that the contracting phase be documented and signed off by all parties to avoid future misunderstanding.
Many ethical challenges also arise during the AC implementation phase. Firstly, not all activities constitute an AC. The International Taskforce on Assessment Center Guidelines (2015) highlighted this issue and stated that, while many activities (e.g. computerised in-baskets requiring multiple-choice responses only) may have AC characteristics, they are not necessarily simulations used as part of ACs. AC practitioners should therefore guard against the use of non-AC methods under the guise of an AC. Secondly, process-related challenges arise. These may concern the administration of measurements, the observation and interpretation of behaviour, and the overseeing of the entire AC. Details such as suitable venues, contingency plans, standardised procedures, adequate manuals and sufficiently competent assessors were deemed important. Further basic requirements, inherently informed by participant rights, were raised: transparency and openness among all AC stakeholders, informed consent from participants prior to assessment, agreed-upon levels of confidentiality, and valid and constructive feedback. With regard to the organisation, return-on-investment considerations were emphasised throughout.
Project closure is believed to present its own ethical challenges, primarily related to feedback and data management. With regard to data management, the focus group raised and answered the following questions:
- Who owns the data?
- How long should the data be stored?
- Who is allowed access to what information?
Upon completion of an AC, practitioners were advised to reflect on the process as a whole, learn from mistakes and capitalise on strengths to improve future practice.
Ethical culture of the client organisation
This theme raised awareness of the influence that organisational culture can have on ACs. ACs often take place on the premises of the client organisation and, even where this is not the case, they are mostly conducted on behalf of a particular organisation. Organisations are therefore important stakeholders and an integral part of any AC (Krause et al., 2011; Schlebusch & Roodt, 2008; Thornton et al., 2015). A pertinent ethical challenge that emerged from the study was managers’ interference with AC processes, often with the intention of influencing outcomes. The following account of one such incident, from a survey participant, illustrates the point:
A CEO asked us to change the scores of one of his executive managers who was his ‘blue-eyed boy’ when we were assessing for succession planning. The executive was a clone of the CEO and he scored relatively weakly on the AC, but the CEO wanted to appoint him to take over anyway but wanted us to change the scores to justify the appointment. [P74]
According to the focus group, managers may sometimes attempt to influence processes and outcomes implicitly: because they are unlikely to ask for favours explicitly, they may attempt to influence matters indirectly. Practitioners were advised to note that some managers may believe they have a right to influence AC outcomes and processes simply because they pay for the AC; however, paying for an AC does not confer ownership of the process. Practitioners were therefore advised to clarify their own role and the role of managers and clients during the contracting phase to avoid unwarranted interference at a later stage.
The ethical character of an AC may often be negatively influenced by an organisation’s lack of an overarching ethical culture. To counter a culture of deception, manipulation, dishonesty and ill intent, the following were deemed imperative: regard for ethical decision-making, the strategic integration of ethics into the organisation’s culture, and management’s openness to considering ethics in all AC processes and decisions. In this respect, organisational politics – defined by Coetzee and Schreuder (2010) as ‘self-serving actions to affect behaviour of others to achieve personal goals’ (p. 520) – may lead to tension between professional ethics and organisational expectations (Muleya, Fourie & Schlebusch, 2013).
Evasive nature of ethics as a concept
Research participants mentioned the evasive nature of ethics as a concept and raised a number of questions:
- Can one assess ethics?
- Is it necessary to assess ethics?
- How do you develop tools for assessing ethics?
- How do you use norms?
- From where do we derive norms?
The focus group acknowledged the challenges in this regard and suggested a number of strategies: adopt a multifaceted view of ethics; seek to understand the notion of ethics and the challenges of defining and measuring it within a specific context; decide on what is good and ethical within contextual realities; and acknowledge that what is right for one person may not be right for another. The focus group discussion highlighted the need for research in this field and emphasised the imperative to seek universal ethical guidelines on the use of ACs, while allowing for contextual differences, globally and locally. The existing AC guidelines describe the ‘what’ of ACs and, to some extent, the ‘how’. A Code of Ethics for ACs in SA should arguably describe the ‘how’ in more detail, so as to serve as an aspirational and directional guide when the AC practitioner interacts with all AC stakeholders (Meiring et al., 2016).
Conclusion
This study investigated ethical challenges in ACs in SA. The findings revealed a number of ethical challenges and dilemmas, largely in line with research in this field (Caldwell et al., 2003; Levin & Buckett, 2011; Roodt et al., 2013). The results are presented in the form of a conceptual framework to provide a lens through which these challenges can be viewed and understood. The study also produced 93 practice-informed recommendations for minimising these challenges (see Appendix 3). In addition to providing practice-informed recommendations, the study should serve to enrich the existing body of knowledge, activate constructive debate and lay a foundation for future research. In this regard, the following suggestions are made: Firstly, the study focused exclusively on the views of AC practitioners; the lived experiences of AC participants should also be explored to obtain a more balanced view. Secondly, insights into the lived experiences of AC clients may well highlight further areas of ethical risk. Finally, AC assessors need to be questioned on the nature of their organisational-professional conflicts to enhance insights into the real ethical dilemmas that arise within these relationships. The following conclusions drawn from this study are presented to the AC community:
- The proposed conceptual framework of ethical challenges in ACs may serve to guide stakeholders’ ethical awareness when using ACs.
- Whether or not an AC is defined as a psychological test, the criteria of validity and reliability apply to all aspects of ACs.
- The notion of fairness in the application of ACs is non-negotiable. In the SA context, fairness may be conceptualised at the level of procedural, interactional and distributive justice.
- Regulatory-legal uncertainties regarding psychological assessment in the SA context need to be clarified for the benefit of all stakeholders. This pertains to two aspects: (1) the responsibility of the HPCSA (or any other body) to regulate psychological tests and/or other forms of assessment including ACs and (2) the concept ‘validity’ as conceptualised in the EEA.
- Existing best practice guidelines for the use of ACs (locally and internationally) need to be internalised and implemented by all AC practitioners, and existing guidelines need to be benchmarked and improved on an ongoing basis.
- International and local guidelines need to accommodate the realities of globalisation and emerging thought leadership in this domain.
- Existing best practice guidelines for the use of ACs need to be supplemented by an aspirational code of ethics that inspires practitioners ‘to do the right thing’. Such a newly developed aspirational code should embody – at a minimum – the notions of procedural, interactional and distributive justice.
- The establishment of a central body of governance for ACs in SA may need to be considered.
- An international body for the registration of ACs to ensure adherence to minimum standards may improve ethical AC use globally.
- The notion that all AC stakeholders accept joint responsibility for the ethical nature of an AC needs to be propagated.
- ‘Intellectual property’ as this may pertain to ACs needs to be explored and clarified.
These conclusions may serve as guidelines for AC practitioners and further stimulate academic debate regarding ethical challenges in the use of ACs – both globally and in SA.
Acknowledgements
Competing interests
The authors declare that they have no financial or personal relationship(s) which may have inappropriately influenced them in writing this article.
Authors’ contributions
This research was a coordinated team effort. Mr V.R. Muleya was responsible for the completion of the study as part of a master’s degree in Industrial Psychology at the University of Johannesburg. He conducted the initial literature review, collected and analysed the data and wrote up the findings. Dr L. Fourie and Ms S. Schlebusch supervised the study, which included conceptualising the study, obtaining entrance to the research setting, managing the research process and making sense of the data. This article, the product of a substantial rework of the original dissertation, involved expanding the literature review, rethinking the findings and re-presenting the conclusions and recommendations in a condensed and user-friendly format.
References
Adelman, H. (1991). Morality and ethics in organisation administration. Journal of Business Ethics, 10(2), 665–678. https://doi.org/10.1007/BF00705873
Aguinis, H., Joo, H., & Gottfredson, R.K. (2011). Why we hate performance management – And why we should love it. Business Horizons, 54, 503–507. https://doi.org/10.1016/j.bushor.2011.06.001
Assessment Centre Study Group Taskforce on Assessment Centres in South Africa. (2015). Guidelines for best practice use of the assessment centre method in South Africa. (5th edn.). Retrieved October 22, 2016, from http://www.acsg.co.za/ac_information/guidelines/AC-Guidelines-SouthAfrica_16032015_FINALDRAFT.pdf
Bagues, M., & Perez-Villadoniga, M.J. (2013). Why do I like people like me? Journal of Economic Theory, 148(2013), 1292–1299. https://doi.org/10.1016/j.jet.2012.09.014
Bergh, Z., & Theron, A.L. (Eds.). (2009). Psychology in the work context. (4th edn.). Cape Town: Oxford University Press.
Botha, W. (2016). Best practice guidelines of the assessment centre method in South Africa: A practitioner’s view. Paper presented at the 2016 Annual Assessment Centre Study Group Conference. Somerset-West, South Africa.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101. https://doi.org/10.1191/1478088706qp063oa
British Psychological Society (Division of Occupational Psychology). (2015). The design and delivery of assessment centres. Leicester, UK: British Psychological Society.
Brits, N.M., Meiring, D., & Becker, J.R. (2013). Investigating the construct validity of a development assessment centre. SA Journal of Industrial Psychology, 39(1), 1092–1103. https://doi.org/10.4102/sajip.v39i1.1092
Buckett, A., Becker, J.R., & Roodt, G. (2017). General performance factors and group differences in assessment centre ratings. Journal of Managerial Psychology, 32(4), 298–313. https://doi.org/10.1108/JMP-08-2016-0264
Caldwell, C., Gruys, M.L., & Thornton, G.C., III. (2003). Public safety assessment centers: A steward’s perspective. Public Personnel Management, 32(2), 229–249. https://doi.org/10.1177/009102600303200204
Cascio, W.F. (1978). Applied psychology in personnel management. (4th edn.). Englewood Cliffs, NJ: Prentice Hall.
Coetzee, M., & Schreuder, D. (Eds.). (2010). Personnel psychology: An applied perspective. Cape Town: Oxford University Press.
Coetzee, M., & Schreuder, D. (Eds.). (2016). Personnel psychology: An applied perspective (2nd ed.). Cape Town: Oxford University Press.
Collins, J.M., Schmidt, F.L., Sanchez-Ku, M., Thomas, L., McDaniel, M.A., & Le, H. (2003). Can basic individual differences shed light on the construct meaning of assessment centre evaluations? International Journal of Selection and Assessment, 11(1), 17–29. https://doi.org/10.1111/1468-2389.00223
Constitution of the Republic of South Africa, Act No. 108 of 1996. Pretoria: Government of South Africa.
Consumer Protection Act, No. 68 of 2008. Pretoria: Government of South Africa.
Crawley, B., Pinder, R., & Herriot, P. (1990). Assessment centre dimensions, personality and aptitudes. Journal of Occupational Psychology, 63, 211–216. https://doi.org/10.1111/j.2044-8325.1990.tb00522.x
Crotty, M. (2003). The foundations of social research: Meaning and perspective in the research process. London, UK: Sage.
De Beer, E. (2012). The influence of introversion/extroversion bias on leadership assessment with behaviour observation. Unpublished master’s dissertation, University of Pretoria. Retrieved June 19, 2016, from http://hdl.handle.net/2263/23650
De George, R.T. (1999). Business ethics. (5th edn.). Upper Saddle River, NJ: Prentice-Hall.
Denzin, N.K., & Lincoln, Y. (2005). Handbook of qualitative research. Thousand Oaks, CA: Sage.
Denzin, N.K., & Lincoln, Y. (2008). Collecting and interpreting qualitative material. Los Angeles, CA: Sage.
Dewberry, C., & Jackson, D.J.R. (2016). The perceived nature and incidence of dysfunctional assessment center features and processes. International Journal of Selection and Assessment, 24(2), 189–196. https://doi.org/10.1111/ijsa.12140
Donald, F., Thatcher, A., & Milner, K. (2014). Psychological assessment for redress in South African organisations: Is it just? South African Journal of Psychology, 44(3), 333–349. https://doi.org/10.1177/0081246314535685
Duggleby, W. (2005). What about focus group interaction data? Qualitative Health Research, 15(6), 832–840. https://doi.org/10.1177/1049732304273916
Elias, S.M. (2013). Deviant and criminal behaviour in the workplace. New York: NYU Press.
Employment Equity Act, No. 55 of 1998, Section 8. Pretoria, Republic of South Africa.
Employment Equity Act, No. 55 of 1998, Section 8 (as amended by Act No. 47 of 2013). Pretoria, Republic of South Africa.
Esterberg, K.G. (2002). Qualitative methods in social research. Toronto, Canada: McGraw-Hill.
Falk, A., & Fox, S. (2014). Gender and ethnic composition of assessment centers and its relationship to participants’ success. Journal of Personnel Psychology, 13(1), 11–20. https://doi.org/10.1027/1866-5888/a000100
Fink, A. (2003). The survey handbook. Thousand Oaks, CA: Sage.
Fiske, P. (1999). Bias: Identifying, understanding and mitigating negative biases in your job search. Science, 1999(10). Retrieved June 04, 2016, from http://www.sciencemag.org/careers/mitigating-negative-bias-your-job-search
Flick, U. (2014). An introduction to qualitative research. (5th edn.). Los Angeles, CA: Sage.
Foxcroft, C.D. (1997). Psychological testing in South Africa: Perspectives regarding ethical and fair practices. European Journal of Psychological Assessments, 13(3), 229–235. https://doi.org/10.1027/1015-5759.13.3.229
Foxcroft, C.D., & Roodt, G. (Eds.). (2013). Introduction to psychological assessment in the South African context. (4th edn.). Cape Town: Oxford University Press.
Freeman, T. (2006). ‘Best practice’ in focus group research: Making sense of different views. Journal of Advanced Nursing, 56(5), 491–497. https://doi.org/10.1111/j.1365-2648.2006.04043.x
Furnham, A., Jensen, T., & Crump, J. (2008). Personality, intelligence and assessment centre expert ratings. International Journal of Selection and Assessment, 16(4), 356–365. https://doi.org/10.1111/j.1468-2389.2008.00441.x
Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597–607. Retrieved October 06, 2014, from http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf
Groves, R.M., Fowler, F.J., Couper, M.P., Lepkowski, J.M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: John Wiley & Sons.
Hagan, C.M., Konopaske, R., Bernardin, H.J., & Tyler, C.L. (2006). Predicting assessment centre performance with 360-degree, top-down, and customer-based competency assessments. Human Resource Management, 45(3), 357–390. https://doi.org/10.1002/hrm.20117
Health Professions Act, No. 56 of 1974. Pretoria, Republic of South Africa.
Health Professions Act, No. 56 of 1974 (as amended by Act No. 29 of 2007). Pretoria, Republic of South Africa.
Health Professions Act, No. 56 of 1974 (Notice R717 of 2006). Pretoria, Republic of South Africa.
Hermelin, E., Lievens, F., & Robertson, I.T. (2007). The validity of assessment centres for the prediction of supervisory performance ratings: A meta-analysis. International Journal of Selection and Assessment, 15(4), 405–411. https://doi.org/10.1111/j.1468-2389.2007.00399.x
Hoffman, W.M., & Frederick, R.E. (1995). Business ethics. Reading and cases in corporate morality. (3rd edn.). New York: McGraw-Hill.
Howard, A. (2008). Making assessment centres work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(1), 98–104. https://doi.org/10.1111/j.1754-9434.2007.00018.x
Illies, J.J., & Reiter-Palmon, R. (2007). Responding destructively in leadership situations: The role of personal values and problem construction. Journal of Business Ethics, 82, 251–272. https://doi.org/10.1007/s10551-007-9574-2
International Taskforce on Assessment Center Guidelines. (2015). Guidelines and ethical considerations for assessment center operations. (6th edn.). Retrieved February 13, 2016, from http://www.assessmentcenters.org/Assessmentcenters/media/2014/2014-Final-Presentations/International-AC-Guidelines-6th-Edition-2014.pdf
International Test Commission. (2013). International Test Commission Guidelines on Test Use. Retrieved from https://www.intestcom.org/files/guideline_test_use.pdf
Jackson, D.J.R., Michaelides, G., Dewberry, C., & Kim, Y.-L. (2016). Everything that you have ever been told about assessment center ratings is confounded. Journal of Applied Psychology, 101(7), 976–994. https://doi.org/10.1037/apl0000102
Jackson, D.J.R., Stillman, J.A., & Englert, P. (2010). Task-based assessment centers: Empirical support for a systems model. International Journal of Selection and Assessment, 18(2), 141–154. https://doi.org/10.1111/j.1468-2389.2009.00467.x
Jansen, H. (2010). The logic of qualitative survey research and its position in the field of social research methods. Forum: Qualitative Social Research, 11(2), Art. 11. Retrieved July 14, 2015, from http://nbn-resolving.de/urn:nbn:de:0114-fqs1002110
Krause, D.E., Rossberger, R.J., Dowdeswell, K., Venter, V., & Joubert, T. (2011). Assessment centre practices in South Africa. International Journal of Selection and Assessment, 19(3), 262–275. https://doi.org/10.1111/j.1468-2389.2011.00555.x
Kriek, H.J. (1991). Die bruikbaarheid van die takseersentrum: ‘n Oorsig van resente literatuur. SA Journal of Industrial Psychology, 17(3), 34–37. https://doi.org/10.4102/sajip.v17i3.533
Kriek, H.J., Hurst, D.N., & Charoux, A. (1994). The assessment centre: Testing the fairness hypothesis. SA Journal of Industrial Psychology, 20(2), 21–25.
Krueger, R.A., & Casey, M.A. (2000). Focus groups: A practical guide for applied researchers. (3rd edn.). Thousand Oaks, CA: Sage.
Kuncel, N.R., & Highhouse, S. (2011). Complex predictions and assessor mystique. Industrial and Organisational Psychology, 4(2011), 302–306. https://doi.org/10.1111/j.1754-9434.2011.01343.x
Kuncel, N.R., Klieger, D.M. Connelly, B.S., & Ones, D.S. (2013). Mechanical versus clinical data combination in admissions decisions: A meta-analysis. Journal of Applied Psychology, 98(6), 1060–1072. https://doi.org/10.1037/a0034156
Kuncel, N.R., Klieger, D.M., & Ones, D.S. (2014). In hiring, algorithms beat instinct. Harvard Business Review, 92(5), 32. HBR Reprint F14050.
Labour Relations Act, No. 66 of 1995. Pretoria, Republic of South Africa.
Lacey, A.R. (1976). A dictionary of philosophy. Boston, MA: Routledge.
Levin, M.M., & Buckett, A. (2011). Discourse regarding ethical challenges in assessments – Insights through a novel approach. SA Journal of Industrial Psychology, 37(1), 1–13. https://doi.org/10.4102/sajip.v37i1.949
Lievens, F. (2001). Assessor training strategies and their effects on accuracy, inter-rater reliability, and discriminant validity. Journal of Applied Psychology, 86, 255–264. https://doi.org/10.1037/0021-9010.86.2.255
Mays, N., & Pope, C. (2000). Qualitative research in health care: Assessing quality in qualitative research. British Medical Journal, 320, 50–52. https://doi.org/10.1136/bmj.320.7226.50
Meiring, D., & Buckett, A. (2016). Best practice guidelines for the use of the assessment centre method in South Africa (5th edn.). SA Journal of Industrial Psychology, 42(1), Art. #1298. https://doi.org/10.4102/sajip.v42i1.1298
Meiring, D., Schlebusch, S., & Lowman, R.L. (2016). A code of ethics for assessment centre practice. Paper presented at the 36th Annual Assessment Centre Study Group of South Africa Conference, Somerset West, South Africa.
Meriac, J.P., Hoffman, B.J., Woehr, D.J., & Fleisher, M.S. (2008). Further evidence for the validity of assessment center dimensions: A meta-analysis of the incremental criterion-related validity of dimension ratings. Journal of Applied Psychology, 93, 1042–1052. https://doi.org/10.1037/0021-9010.93.5.1042
Morgan, D.L. (2013). Focus group as qualitative research: Planning and research design for focus groups. London: Sage.
Morgan, D.L., & Bottorff, J.L. (2010). Advancing our craft: Focus group methods and practice. Qualitative Health Research, 20(5), 579–581. https://doi.org/10.1177/1049732310364625
Moerdyk, A. (2009). The principles and practice of psychological assessment. Pretoria: Van Schaik Publishers.
Muchinsky, P.M. (2011). Psychology applied to work. (10th edn.). Summerfield, NC: Hypergraphic Press.
Muchinsky, P.M., Kriek, H.J., & Schreuder, D. (Eds.). (2005). Personnel psychology. (3rd edn.). Cape Town: Oxford University Press.
Mulder, G., & Taylor, N. (2015). Validity of assessment centres (ACs) as a selection and development measure. Nari: JvR Psychometrics.
Muleya, V., Fourie, L., & Schlebusch, S. (2013). Challenges in assessment centres within the South African context. Paper presented at the 2013 International Conference on Assessment Centre Methods and Assessment Centre Study Group Conference, Stellenbosch.
Müller, K.P., & Roodt, G. (2013). Content validation: The forgotten step-child or a crucial step in AC validation? SA Journal of Industrial Psychology, 39(1), 1153. https://doi.org/10.4102/sajip.v39i1.1153
Myers, M.D. (2009). Qualitative research in business and management. Los Angeles, CA: Sage.
Myers, M.D. (2013). Qualitative research in business and management. (2nd edn.). Los Angeles, CA: Sage.
National Planning Commission. (2013). National Development Plan Vision 2030. Cape Town. Retrieved July 16, 2014, from http://hdl.handle.net/123456789/941
Neuendorf, K.A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.
Neuman, W.L. (2000). Social research methods: Qualitative and quantitative approaches. Needham Heights, MA: Allyn & Bacon.
Peters, D.A. (1993). Improving quality requires consumer input: Using focus groups. Journal of Nursing Care Quality, 7(2), 34–41. https://doi.org/10.1097/00001786-199301000-00006
Pietersen, H.J. (1992). Die waarde van die takseersentrum. SA Journal of Industrial Psychology, 18(1), 20–22. https://doi.org/10.4102/sajip.v18i1.538
Pojman, L.P. (1994). Ethics: Discovering right and wrong. (2nd edn.). Belmont, CA: Wadsworth Publishing Company.
Pope, K.S., & Vasquez, M.J.T. (1998). Ethics in psychotherapy and counselling: A practical guide. (2nd edn.). San Francisco, CA: Jossey-Bass.
Promotion of Access to Information Act, No. 2 of 2000. Pretoria: Republic of South Africa.
Protection of Personal Information Act, No. 4 of 2013. Pretoria: Republic of South Africa.
Reese, W.L. (1980). Dictionary of philosophy and religion. Atlantic Highlands, NJ: Humanities Press.
Roch, S.G. (2006). Discussion and consensus in rater groups: Implications for behavioural and rating accuracy. Human Performance, 19(2), 91–115. https://doi.org/10.1207/s15327043hup1902_1
Roodt, G., de Kock, F., & Schlebusch, S. (2013). Criteria, practices and ethical pitfalls when selecting assessment instruments for your assessment centre. Paper presented at the 2013 International Conference on Assessment Centre Methods and Assessment Centre Study Group Conference, Stellenbosch. Retrieved December 05, 2016, from http://www.acsg.co.za/archives/2013-icacm-and-acsg-conference/RoodtDeKockSchlebuschv2-ACSG-2013.pdf
Rossouw, D., & Van Vuuren, L.J. (2014). Business ethics. (5th edn.). Cape Town, South Africa: Oxford University Press.
Schlebusch, S., & Roodt, G. (2008). Assessment centres: Unlocking potential for growth. Randburg, South Africa: Knowres Publishing.
Schollaert, E., & Lievens, F. (2011). The use of role-player prompts in assessment center exercises. International Journal of Selection and Assessment, 19(2), 190–197. https://doi.org/10.1111/j.1468-2389.2011.00546.x
Shah, S.K., & Corley, K.G. (2006). Building better theory by bridging the quantitative qualitative divide. Journal of Management Studies, 43(8), 1821–1835. https://doi.org/10.1111/j.1467-6486.2006.00662.x
Shenton, A.K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22(2), 63–75. https://doi.org/10.3233/EFI-2004-22201
Simonenko, S., Thornton, G.C., III, Gibbons, A.M., & Kravtcova, A. (2013). Personality correlates of assessment center competency ratings: Evidence from Russia. International Journal of Selection and Assessment, 21(4), 407–418. https://doi.org/10.1111/ijsa.12050
Society for Industrial & Organisational Psychology in South Africa. (2005). Guidelines for the validation and use of personnel selection procedures for the workplace. Pretoria: SIOPSA.
Spence Laschinger, H.K., Wong, C.A., & Greco, P. (2006). The impact of staff nurse empowerment on person-job-fit and work engagement/burnout. Nursing Administration Quarterly, 30(4), 358. https://doi.org/10.1097/00006216-200610000-00008
Test Publishers v. Republic of South Africa and others. (2017). Northern Gauteng Division of the High Court of South Africa, 89564/14[2017]. Retrieved June 21, 2017, from www.saflii.org/za/cases/ZAGPPHC/2017/144.html
Thornton, C.G., III. (1992). Assessment centres in human resource management. Reading, MA: Addison-Wesley.
Thornton, G.C., III., & Gibbons, A.M. (2009). Validity of assessment centres for personnel selection. Human Resource Management Review, 19, 169–187. https://doi.org/10.1016/j.hrmr.2009.02.002
Thornton, G.C., III., & Rupp, D.E. (2006). Assessment centres in human resource management: Strategies for prediction, diagnosis, and development. Mahwah, NJ: Erlbaum.
Thornton, G.C., III., Rupp, D.E., & Hoffman, B.J. (2015). Assessment center perspectives for talent management strategies. (2nd edn.). New York: Routledge.
Van Vuuren, L.J., & Schlebusch, S. (2013). Ethical assessment centres: Theory, principles and cases. Paper presented at the 2013 ICACM (International Conference on Assessment Centre Methods) and ACSG (Assessment Centre Study Group) Conference, Stellenbosch, South Africa. Retrieved December 05, 2016, from http://www.acsg.co.za/archives/2013-icacm-and-acsg-conference/VVuurenSchlebusch_EthicalAssessmentCentres2013.pdf
Wheatley, M.J. (2009). Leadership and the new science: Discovering order in a chaotic world. San Francisco, CA: Berrett-Koehler Publishers.
Wiley, C. (1995). The ABC’s of business ethics: Definition, philosophies and implementation. Industrial Management, 37(1), 22–27.
Willig, C. (2001). Introducing qualitative research in psychology: Adventures in theory and method. Maidenhead: Open University Press.
Appendix 1
TABLE 1-A1: Biographical information of survey participants (n = 96)
Appendix 2
TABLE 1-A2: Biographical information of focus group participants (n = 16)
Appendix 3
TABLE 1-A3: Recommendations to address ethical challenges in ACs in South Africa (listed in alphabetical order according to an overarching theme)