About the Author(s)


Deon Meiring
Department of Human Resource Management, Faculty of Economic and Management Sciences, University of Pretoria, South Africa

Anne Buckett
Department of Human Resource Management, Faculty of Economic and Management Sciences, University of Pretoria, South Africa

Precision HR, Johannesburg, South Africa

Citation


Meiring, D., & Buckett, A. (2016). Best practice guidelines for the use of the assessment centre method in South Africa (5th edition). SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 42(1), a1298. http://dx.doi.org/10.4102/sajip.v42i1.1298

Original Research

Best practice guidelines for the use of the assessment centre method in South Africa (5th edition)

Deon Meiring, Anne Buckett

Received: 17 Aug. 2015; Accepted: 09 Nov. 2015; Published: 17 May 2016

Copyright: © 2016. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Orientation: Assessment Centres (ACs) have a long and successful track record in South Africa when used for selection and development purposes. The popularity of the approach is mainly attributable to the numerous strengths of ACs, which include their perceived fairness, practical utility and strong association with on-the-job performance. To maintain the integrity of the AC, it is important for practitioners and decision makers to apply the method in a consistent and standardised manner.

Research purpose: The purpose of the report is to provide practitioners and decision makers with practical guidelines and concrete procedures when using ACs as part of the organisation’s human resource management strategy.

Motivation for the study: The past decade has seen significant advances in the science and practice of ACs. Now in its fifth edition, the revised Guidelines seek to provide practitioners and decision makers with important information on a number of factors relevant to the AC method, namely, technology, validation, legislation, ethics and culture.

Main findings: The Guidelines provide specific suggestions and recommendations for using technology as part of the manner of delivery. Issues of culture, diversity and representation are also discussed. New features of the Guidelines include more concrete guidance on how to conduct a validation study as well as unpacking several ethical dilemmas that practitioners may encounter. Of critical importance is a position statement on the use of ACs in relation to new legislation (Employment Equity Amendment Act, Section 8, clause d) pertaining to psychometric testing.

Practical/managerial implications: The Guidelines serve as a benchmark of best practice for practitioners and decision makers who intend to use, or are currently using, ACs in their organisations.

Contribution/value-add: In the absence of formal standards governing the use of ACs in South Africa, the Guidelines provide an important step towards establishing standardisation in the use of the AC method. The Guidelines provide (1) guidance to industrial and organisational psychologists, organisational consultants, human resource management specialists, generalists and the Department of Labour, and others designing and conducting ACs; (2) information to managers deciding whether to introduce AC methods; (3) instructions to assessors taking part in the AC; (4) guidance on the use of technology and navigating diverse cultural contexts; and (5) a reference for professionals on best practice considerations in the use of the AC method.

Purpose

The fifth edition of the Assessment Centre Guidelines for South Africa was compiled by a Taskforce under the auspices of the Assessment Centre Study Group (ACSG) of South Africa in 2015 (see Appendix 1). The purpose of this document is to establish professional guidelines and communicate ethical considerations for users of Assessment Centres (ACs) in South Africa. The revised fifth edition Guidelines represent an update of the 2007 fourth edition Guidelines and take into consideration the latest international developments as well as AC design, implementation and evaluation in the South African workplace.

These Guidelines provide (1) guidance to industrial and organisational psychologists, organisational consultants, human resource (HR) management specialists, generalists and the Department of Labour, and others designing and conducting ACs; (2) information to managers deciding whether to introduce AC methods; (3) instructions to assessors taking part in the AC; (4) guidance on the use of technology and navigating diverse cultural contexts; and (5) a reference for professionals on best practice considerations in the use of the AC method.

History of the guidelines

Since its establishment in 1981, the ACSG has played a key role in disseminating information about ACs through its annual conference in Stellenbosch as well as through newsletters and networking activities. The mid-1980s saw an increase in the use of ACs for various purposes (e.g. selection and development) in organisations. As more HR practitioners and consultants made use of the AC method, some with limited experience, the ACSG started to play a more prominent role with regard to the professional and ethical aspects related to the use of ACs in South Africa.

First edition (1987)

During the 1980s, the use of ACs in South Africa increased at a pace comparable to international trends. The ACSG was consequently established in 1981 under the auspices of the Institute for Personnel Management (IPM) to guide the use of ACs by organisations in South Africa (e.g. Old Mutual, Transport Services (South African Railway Services), Stellenbosch Farmers Winery (now part of Distell), the Department of Post and Telecommunication Services, Nasionale Pers (Naspers), the South African Army and the South African Police). By 1987, the use of ACs was widespread and the ACSG believed that there was cause for reflection at this point. This was because of (1) the lack of appropriate legislation to regulate the use of personnel assessment techniques and (2) the emergence of consultants and HR practitioners who did not possess the required exposure to, and experience of, the AC methodology needed to effectively implement ACs in organisations.

These issues were considered serious and, at an executive meeting held in June 1987, it was decided to adapt the 1979 International Guidelines to conform to South African legal requirements. It was furthermore decided that these guidelines would be published in the IPM Journal and that the ACSG would assume the role of monitoring AC applications in South Africa. The role of the ACSG was described as follows: ‘In view of the concerns about the implementation of ACs in the introduction of this article, it becomes clear that the ACSG, and more specifically the executive, will have to play a more watchful role. It does not want to play a policing role and neither does it have the resources or authority to do so. It will, however, in future have to be very much alert in order to continuously monitor activities in the field’ (Spangenberg, 1987, p. 18).

Second edition (1991)

New International Guidelines were endorsed by delegates at the 17th International Congress on the AC Method in May 1989 in Pittsburgh, Pennsylvania, USA, and this prompted the South African AC fraternity to also revise their Guidelines. These revised Guidelines were presented by Hermann Spangenberg, chairperson of the project, at the 11th annual ACSG conference (Stellenbosch, March 1991) and copies were circulated to delegates. The Guidelines were discussed and it was decided that a decision would be taken concerning the Guidelines during the Annual General Meeting on the second day of the conference. This would allow delegates time to think about possible implications of accepting the Guidelines. Proceedings of the sessions were summarised as follows: (1) the 1989 International Guidelines were endorsed unanimously by delegates; (2) the chairperson was asked to edit the 1989 International Guidelines (for the purpose of clarity and brevity) and to circulate the document to members of the executive for approval and (3) to submit edited copies to the IPM Journal for publication and to the secretary of the ACSG for circulation to members. The role of the executive with regard to the application of the Guidelines was discussed. Of special interest was the proposed advisory role that executive committee members could play during the construction of an AC. However, in order to safeguard committee members from possible litigation, it was decided that committee members could not officially be called upon to approve procedures or steps in the AC construction process. However, committee members, who were usually experienced AC practitioners, could be consulted informally. This had, in fact, already been common practice in the past. Although the endorsed International Guidelines would have no formal legal status, they could play an important role in litigation in as much as they would be considered to represent the opinion of experts in the AC field.

Third edition (1999)

During the 1998 ACSG conference in Stellenbosch, a decision was taken to revise the 1991 Guidelines in order to better align them with the legal and social developments taking place in South Africa at the time. In addition, the Guidelines needed to meet the requirements of new labour legislation as promulgated in the Employment Equity Act (Act No. 55 of 1998) and also needed to meet validity requirements. In their strategy for revising the Guidelines, the taskforce committee adopted the following criteria: (1) relevant stakeholders were consulted (e.g. ACSG members, representatives from the Department of Labour and the South African Qualifications Authority) and (2) a draft copy of the guidelines was distributed at two sessions of the Psychological Assessment Initiative, an interest group of the Society for Industrial and Organisational Psychology of South Africa, for members to give comments. The following step-by-step process was followed: (1) inputs from stakeholders were obtained, (2) a taskforce consisting of members of the ACSG executive committee integrated information and developed a draft proposal and (3) the final proposal was submitted for endorsement at the annual ACSG conference in March 1999 (Stellenbosch).

Fourth edition (2007)

During the 2006 annual ACSG Conference in Stellenbosch, it was decided to revise the 1999 Guidelines to ensure alignment with the 2000 International Guidelines and to incorporate the 2006 Professional Guidelines for Global ACs. Two of the key features of the 2007 Guidelines were the inclusion of Development Assessment Centres (DACs) as part of the guidelines and an emphasis on the cross-cultural application of ACs and DACs in South Africa. The following steps were followed: (1) various stakeholders, especially in the consulting domain of ACs, were consulted; (2) the latest literature on AC Guidelines was collected and studied; and (3) a taskforce consisting of members of the ACSG facilitated a work session where a broad structure for the 2007 guidelines was proposed. A draft version of the 2007 Guidelines was introduced at the 27th annual ACSG Conference (March 2007, Stellenbosch). The revised 2007 Guidelines were published and distributed at the 28th annual ACSG Conference (March 2008, Stellenbosch). The fourth edition Guidelines were also published on the ACSG website and made available for users at no charge.

Fifth edition (2015)

The current edition was initiated partly to align these Guidelines with the sixth edition Guidelines and Ethical Considerations for Assessment Centre Operations (International Taskforce on Assessment Centre Guidelines, 2015). In addition, the revised Guidelines seek to take into account the numerous scientific advancements in the AC domain in South Africa. Since the publication of the fourth edition Guidelines (which were due for an update), many significant changes and developments in the AC field have taken place. South African AC practitioners had to take note of new research insights, in particular AC design models and new construct validity practices. Furthermore, it was necessary to make specific allowance in the fifth edition Guidelines for the impact of technology and cultural considerations on AC practice in South Africa. The ACSG also had to clarify its position on AC practice and the Employment Equity Act, Section 8, especially with regard to clause d. Under the chairmanship of Deon Meiring (University of Pretoria) and co-chair Anne Buckett (Precision HR), a taskforce was assembled consisting of former members of the fourth edition revision process, academics, consultants and emerging AC practitioners. Following a structured project management approach, the taskforce members were allocated specific sections of the fourth edition Guidelines to review and amend. These suggestions were then collated and the taskforce held a half-day workshop to review all comments and suggestions and arrive at a majority position on controversial issues. Specific attention was given to technology, legislation, ethics, cultural considerations and the technicalities/practicalities of AC design and validation. In particular, the taskforce debated and arrived at a position statement on ACs concerning alterations to the Employment Equity Amendment Act (Act No. 47 of 2013, Section 8, clause d) pertaining to the classification of ACs in accordance with the Act. The taskforce further advocated for a stronger alignment to the International Guidelines in terms of structure and content, albeit customised to the South African context, to enhance consistency and standardisation. The revised South African AC Guidelines were circulated to ACSG members and delegates before the 35th annual ACSG conference, held at The Lord Charles Hotel in Somerset West, 23–27 March 2015.

Defining the AC method

An AC is a standardised assessment process where one or more participants complete multiple behavioural simulation exercises and are observed by multiple assessors who are trained to observe and evaluate each participant against a number of predetermined, job-related behavioural constructs known as competencies (Schlebusch & Roodt, 2008). Assessment scores are then determined by combining data for each participant by means of either a consensus meeting between assessors or statistical integration (Ballantyne & Povah, 2004). ACs are most commonly used for the selection, diagnosis and development of managers but can be effectively adapted for non-managerial positions. It is important that ACs are developed, implemented and validated to ascertain alignment to the intended purpose of the AC and the broader strategic talent management goals of the organisation.

All AC programmes must contain 10 essential elements:

Job-related behavioural competencies

The starting point of an AC is an analysis of the job and/or managerial context to determine the critical competencies that adequately differentiate between effective and less effective performance of job incumbents. This may be considered the success profile of the job and/or managerial context. Competencies also form the foundation of the AC. South African labour law (Employment Equity Act, Act No. 55 of 1998) states that a job applicant may only be assessed on the inherent requirements of the job. Job analysis therefore provides crucial information about the competencies, attributes, characteristics, qualities, skills, abilities, knowledge and tasks that are required to be successful in the job. Job analysis can be carried out in a number of ways, for example, by using structured questionnaires that gather information about work and worker attributes, interviewing subject matter experts, completing task lists, capturing critical incidents, using the repertory grid technique and by referencing job sites or reviewing current documentation. The type and extent of analysis depends on the purpose of the AC, the context in which the competencies manifest themselves, the level of difficulty of common problems encountered in the job and the suitability of existing information about the job. Once the information has been analysed, the critical job competencies are selected. These competencies are viewed as behavioural constructs that should be anchored firmly in the requirements of the job or managerial context. The competencies therefore need to be clearly defined, specific, unambiguous, and expressed in terms of behaviour that can be observed on the job and in the selected behavioural simulation exercises designed to activate the competencies used in the AC. An appropriate number of competencies to measure in an AC for selection purposes could, for example, be six to eight; otherwise, effective measurement of the competencies becomes extremely difficult and AC ratings become diluted. However, for DACs, the appropriate number could be higher (Rupp, Snyder, Gibbons, & Thornton, 2006).

Relationships between competencies and AC techniques

The outcome of the job analysis is a list of critical competencies important for effective job performance. The logical next step is to map the competencies to the various assessment techniques to be administered during the AC. This is known as the ‘assessment matrix’. The assessment matrix provides the overview of the competencies to be assessed in relation to the chosen assessment techniques (Schlebusch & Roodt, 2008). Research demonstrates that assessing fewer competencies leads to better prediction (Bowler & Woehr, 2006; Gaugler & Thornton, 1989; Krause, Rossberger, Dowdeswell, Venter, & Joubert, 2011; Lievens & Conway, 2001). Although there is no consensus in the literature around the specific number of competencies to measure, as a guideline, it is recommended that ideally between four and six competencies should be measured for each behavioural simulation exercise.
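
To make the idea of an assessment matrix concrete, the following minimal sketch (in Python, using entirely hypothetical competency and exercise names) shows one way the matrix could be represented and checked against two guidelines discussed in this document: roughly four to six competencies per behavioural simulation exercise, and each competency observable in at least two exercises.

    # Minimal sketch of an assessment matrix: which competencies are observed
    # in which behavioural simulation exercises. All names are hypothetical.
    assessment_matrix = {
        "in_basket": ["planning", "analysis", "decision_making", "delegation"],
        "role_play": ["communication", "decision_making", "empathy", "delegation"],
        "group_exercise": ["communication", "teamwork", "analysis", "planning"],
        "presentation": ["communication", "analysis", "planning", "decision_making", "empathy"],
    }

    # Check 1: roughly four to six competencies per exercise.
    for exercise, competencies in assessment_matrix.items():
        if not 4 <= len(competencies) <= 6:
            print(f"Review {exercise}: it measures {len(competencies)} competencies.")

    # Check 2: each competency should be observable in at least two exercises
    # so that performance can be compared across situations.
    coverage = {}
    for competencies in assessment_matrix.values():
        for competency in competencies:
            coverage[competency] = coverage.get(competency, 0) + 1
    for competency, count in coverage.items():
        if count < 2:
            print(f"Review {competency}: it is only measured in {count} exercise.")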

Multiple assessment techniques

One of the distinguishing characteristics of the AC is the use of multiple assessment components. This is usually in the form of multiple behavioural simulation exercises that each participant needs to complete and the use of multiple assessors to observe the behaviour of participants. ACs can consist entirely of behavioural simulation exercises or can combine behavioural simulation exercises with other measures such as psychometric tests, competency-based interviews, multi-rater feedback or situational judgement tests. It is recommended that various data collection points are developed during the evaluation of behaviour to assist in the validation of the AC. The assessment techniques are developed or chosen to elicit a variety of behaviours and other information relevant to the selected competencies. The assessment techniques should be pretested to ensure that the techniques provide reliable, objective and relevant behavioural information for the intended organisation and job. Pretesting may entail conducting a pilot AC with participants similar to the target participant group, review by subject matter experts to ensure the veracity of the behavioural measures and/or evidence concerning the use of these techniques for similar jobs in similar organisations.

Simulation exercises

An AC must contain multiple opportunities for participants to display job-related behaviour and for that behaviour to be observed by the assessors. A behavioural simulation exercise is an assessment technique that samples behaviour related to a fictitious job scenario. Although behavioural simulation exercises are not intended to replicate the job of interest, they are nonetheless designed to closely simulate aspects of the work environment in a contextually grounded manner in order to increase fidelity. Simulation exercises require participants to respond behaviourally to situational stimuli and can be administered in a variety of formats such as paper, video, audio, computers, face-to-face, telephonic or the Internet. Examples of behavioural simulation exercises include in-basket exercises, cooperative or competitive group exercises, case studies, role play exercises, presentations and fact-finding exercises (see Thornton, Rupp, & Hoffman, 2015, chapter 4, for a full description of different behavioural simulations).

The design of behavioural simulation exercises is a crucial step in the AC process. These exercises must be constructed in such a way that participants can demonstrate the requisite construct-related behaviour. Assessors also need to be able to detect and observe the behaviour during the administration of the behavioural simulation exercise(s). To this end, it is recommended that behavioural cues are built into the simulation exercises during exercise development. Carefully designed behavioural simulation exercises are thus used as vehicles for eliciting behaviour. This allows participants adequate opportunities to display behaviours linked to the selected competencies. Examples of behavioural cues include prompts provided by role players or stimuli provided in the setting of the behavioural simulation exercise. Trait Activation Theory offers a useful system for developers of behavioural simulation exercises (Lievens, Schollaert, & Keen, 2014; Oliver, Hausdorf, Lievens, & Conlon, 2014; Tett & Burnett, 2003).

Only simulation exercises that require the participant to overtly display selected behaviours can be classified as behavioural simulation exercises. In other words, the participant must construct a response rather than select a response from a predetermined list (e.g. multiple-choice response formats). Furthermore, to gain a fuller understanding of the extent of a participant’s competency performance, a minimum of two behavioural simulation exercises is required. As a guideline, less complex jobs could use as few as two simulation exercises if clearly justified by job analysis. Alternatively, if a single all-inclusive assessment technique is used, then more than two distinct job-related segments need to be developed. AC designers/developers must also ensure that the content of behavioural simulation exercises reflects the inherent requirements of the job and does not unfairly bias different groups of participants (e.g. based on race, age or gender).

Assessors

Another characteristic of the AC is the use of multiple assessors to observe and evaluate the participants. The selection of assessors in South Africa should strive for diversity, insofar as this is practically possible, in terms of demographics (e.g. race, age, gender) and experience. To improve objectivity, each assessor should observe each participant in at least one behavioural simulation exercise. The ratio of assessors to participants is a function of several variables, including the type of exercises used, the competencies being measured, the roles of the assessors, the type of data integration conducted, the amount of training completed by the assessors, the experience of the assessors, the use of supporting documentation provided to assessors and the purpose of the AC. In order to reduce cognitive load, the assessor to participant ratio should be minimised as much as possible. Furthermore, to reduce bias, a line manager should not observe a direct subordinate and an assessor should not observe a person whom they know.

Assessor training

Before taking part in the AC, assessors are required to undergo training. There are two forms of assessor training, namely, behavioural training and frame-of-reference training. Behavioural training consists of acquiring familiarity and experience with the AC process, whereby assessors observe, record, classify and evaluate the behaviour of participants who complete a variety of behavioural simulation exercises. This technique is referred to as ORCE and training in this technique teaches assessors to apply the ORCE process sequentially so that all evidence is first observed and recorded before it is classified and evaluated (International Taskforce on Assessment Centre Guidelines, 2015). To increase the practical value of the training, assessors also complete the behavioural simulation exercises, ideally as part of a simulated AC. They also have the opportunity to become familiar with the selected competencies by scoring the behavioural simulation exercises. Frame-of-reference training also typically includes a practical component related to the behavioural simulation exercises. However, in this form of training, guidance is also provided on making ratings and calibrating scores in accordance with specific behavioural indicators linked to the selected competencies. Information about the host organisation and job are provided as well as information about the purpose of the AC. Refer to Appendix 2 for details on the elements that should be covered in assessor training.

Observing and recording behaviour

Assessors must use a systematic procedure to enable evidence of observed behaviour to be accurately captured during various interactions. This might entail capturing participant comments verbatim and taking detailed notes or using behavioural checklists. Observations may occur post hoc by accessing audio and/or video recordings taken as participants complete behavioural simulation exercises.

Classifying and evaluating behaviours

The behaviours that are observed and recorded by the assessors for every participant completing the behavioural simulation exercises must be classified under each of the selected competencies. This is usually carried out by listing examples of observed behaviour for a participant on a structured rating form designed specifically for the behavioural simulation exercise. Structured rating forms provide a list of behavioural indicators related to each of the selected competencies to be measured in the behavioural simulation exercise. Structured rating forms may be expressed as Likert-type rating scales or behaviourally anchored rating scales (BARS). There are two common ways to score AC candidate data. Firstly, an assessor can evaluate a candidate’s performance in a specific behavioural simulation exercise for all the designated competencies directly after the behavioural simulation exercise is completed. This is known as within-exercise scoring. Secondly, an assessor can evaluate a candidate’s performance for a specific competency after the completion of all behavioural simulation exercises. This is known as across-exercise scoring.
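
As a purely illustrative sketch (hypothetical ratings on a 1–5 scale for a single participant), the following Python snippet shows how the same set of assessor ratings can be summarised using within-exercise scoring or across-exercise scoring.

    # Hypothetical ratings for one participant on a 1-5 scale,
    # organised as exercise -> competency -> rating (illustrative only).
    ratings = {
        "in_basket": {"planning": 4, "analysis": 3, "decision_making": 4},
        "role_play": {"communication": 2, "decision_making": 3},
        "group_exercise": {"communication": 3, "planning": 4, "analysis": 4},
    }

    # Within-exercise scoring: one summary score per exercise,
    # determined directly after that exercise is completed.
    within_exercise = {
        exercise: round(sum(scores.values()) / len(scores), 2)
        for exercise, scores in ratings.items()
    }

    # Across-exercise scoring: one summary score per competency,
    # determined after all exercises have been completed.
    by_competency = {}
    for scores in ratings.values():
        for competency, rating in scores.items():
            by_competency.setdefault(competency, []).append(rating)
    across_exercise = {c: round(sum(r) / len(r), 2) for c, r in by_competency.items()}

    print(within_exercise)
    print(across_exercise)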

Data integration

Assessors must rate each participant’s performance across the various behavioural simulation exercises independently against the selected competencies before the data integration meeting or before statistical integration takes place. The process must be carried out in accordance with professionally accepted standards. If an integration discussion is used, then assessors should report on information gathered and behaviours observed from the various assessment techniques. Assessors should refrain from sharing information that is irrelevant to the purpose of the AC. Assessor evaluations of reported participant behaviour must be supported by tangible evidence demonstrating reliable and valid aggregations of observations, regardless of the applied method of integration. It is important to consider the participant’s performance across a range of situations. However, the level of aggregation of AC ratings is usually dependent on the purpose of the AC. For example, in the case of selection, a broader performance category such as the overall assessment rating (OAR) can be used. In the case of development, overall exercise ratings may be used in conjunction with overall competency ratings. Other considerations during data integration could pertain to weighting competencies as part of a selection strategy, drawing additional information from other measures that form part of the AC and the type of feedback that is provided, for example, feedback on competency performance within an exercise or exercise-specific feedback.
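
Where statistical integration is used, one simple and purely illustrative approach is a weighted average of competency ratings to form the OAR. The sketch below assumes hypothetical competencies and weights; in practice, any weighting scheme would need to be justified by the job analysis and documented in the AC policy.

    # Hypothetical across-exercise competency ratings for one participant
    # (1-5 scale) and design-time weights reflecting the selection strategy.
    competency_ratings = {"planning": 4.0, "analysis": 3.5,
                          "communication": 2.5, "decision_making": 3.5}
    weights = {"planning": 0.2, "analysis": 0.3,
               "communication": 0.2, "decision_making": 0.3}

    # Statistical integration: weighted average as an overall assessment rating (OAR).
    oar = sum(competency_ratings[c] * weights[c] for c in competency_ratings)
    print(round(oar, 2))  # 3.4 for these illustrative values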

Standardisation

The procedures for AC administration must be controlled so that all participants have the same opportunity to demonstrate behaviour related to the designated competencies. This relates to the instructions for behavioural simulation exercises, time limits, exercise materials, conditions (e.g. the facilities used), role player behaviour, the number of participants in group exercises, questions asked by assessors during presentations, the sequence of exercise administration and the scoring procedures. Standardisation is particularly important for ACs used for selection purposes. This element is also vital for validation regardless of the purpose of the AC. However, exceptions may be allowed to, for example, accommodate participants with a disability.

Non-AC activities

It is important to distinguish between the AC as a method for selection and development purposes and the incorporation of elements of the AC method as part of a selection and development process. In order to avoid confusion, activities that do not conform to the basic requirements of an AC as described in these guidelines are listed below:

  1. Psychometric tests as the only measure, either completed on paper or online, which require participants to respond to a series of statements that measure aspects of personality, emotional intelligence or cognitive ability or which require them to make situational judgements.
  2. Use of a single behavioural simulation exercise (e.g. an in-basket exercise) as the primary basis for assessment, even when combined with several psychometric tests.
  3. Using one assessor to observe and evaluate the same (and/or multiple) participant(s) across multiple behavioural simulation exercises.
  4. Using several behavioural simulation exercises and assessors but not undertaking data integration.
  5. Assessment procedures that require no obvious, open and evident behavioural responses from participants: for example, multiple-choice in-basket exercises and situational judgement tests, competency-based interviews and written competency assessments.
  6. Panel interviews or a sequence of interviews as the only technique in the assessment process.
  7. A physical location referred to as an ‘Assessment Centre’ where testing/assessment takes place.
  8. Computerised, automated assessment platforms that do not use open-ended response formats (e.g. platforms restricted to multiple-choice responses) and/or do not include assessor observation and evaluation as part of the process.

ACs for different purposes

ACs can be used for a variety of purposes, namely,

  1. To predict performance, for example, selection, promotion and succession planning. This is the traditional purpose of an AC.
  2. To diagnose areas of strength and development, for example, for the purpose of drawing up unique development plans or to identify potential. This application is termed a diagnostic AC.
  3. For development, for example, as part of a training intervention or to develop designated competencies. This application is referred to as a DAC. DACs are designed to assess and develop participants across a range of selected competencies. Feedback is provided at multiple points in the process and the DAC allows multiple opportunities for participants to practice and improve their performance in the competencies, usually over the course of a few days. Assessors may also act as facilitators and/or coaches during the DAC. DACs are designed to be longer than traditional ACs. In addition, the competencies must be able to be developed within the DAC programme duration.

AC policy

ACs form part of the organisation’s talent management/HR policy. Before the introduction of an AC, the organisation should prepare and approve a policy document. The policy document provides an outline of the steps taken to develop, implement and evaluate the AC. The following items are generally included in the document:

  1. Objective – The purpose of the AC must be specified, for example, selection, diagnosis and/or development.
  2. Review and updates – The AC should be reviewed every 5 years (or sooner, depending on the extent of change in the organisation and job context) to ensure relevance.
  3. Participants – The population to be assessed should be specified as well as the method for selecting participants. The process of notifying participants should be described. It should be clear whether participation in the AC is compulsory or voluntary. The rights of participants, consequences of non-participation and alternatives to assessment via the AC should be outlined. The competency frameworks and different behavioural exercises used in the AC must also be communicated.
  4. Re-assessment – Conditions for re-assessment must be stated. As a guideline, AC results remain valid for 12–18 months. It is therefore not advisable to re-assess participants using the same AC during that time period. Parallel ACs can be used, provided that the AC is validated (i.e. statistically/scientifically proven to be parallel).
  5. Use of data – The process for collecting, using and storing AC data must be outlined. It may also be necessary to further distinguish between different delivery platforms such as when the AC is administered on paper, electronically or over the Internet. It is further necessary to specify who has access to the AC data, how long data will be stored, the process for confidentially disposing of data (when required), how data will be used for research and feedback procedures to organisational decision makers (e.g. line managers) and participants.
  6. Feedback and reporting – Feedback and reporting requirements must be outlined for decision makers and participants, that is, how, when and what kind of feedback and report.
  7. Assessors – The composition of the assessor pool (e.g. race and gender), requisite qualifications and experience, method for selecting assessors and assessor training and certification should be described. Where relevant, assessors may need to meet the ethical and professional regulations set out by the Health Professions Council of South Africa (HPCSA) in the Health Professions Act (Act No. 56 of 1974), for example, when specific psychometric tests are included as part of the AC.
  8. Qualifications of the AC designer/developer – The professional qualifications, experience and related training of the AC designer/developer must be specified. Refer to Section 7 (key AC roles and training requirements) for further details.
  9. Validation – There should be a statement specifying the validation model being used. If a content-oriented validation strategy is used, documentation of the relationship of the job content to the competencies and behavioural simulation exercises should be presented along with evidence of reliability in observation and rating of behaviour. If evidence is being taken from prior validation research, which may have been summarised in meta-analyses, the organisation must document that the current job and AC are comparable to the jobs and ACs studied elsewhere. If local validation has been carried out, full documentation of the study should be provided. If validation studies are ongoing, there should be a time schedule indicating when a validation report will be available.
  10. Legislative requirements – In South Africa, legislation regulates the use of assessments in the workplace. To this end, the policy document must take into account the following laws: Employment Equity Act (Act No. 55 of 1998), Employment Equity Amendment Act (Act No. 47 of 2013), Promotion of Access to Information Act (Act No. 2 of 2000) and Protection of Personal Information Act (Act No. 4 of 2013).
  11. Use of technology – A list of technical requirements for administering the AC programme as well as operational requirements related to the use of technology must be specified. Refer to the International Taskforce on Assessment Centre Guidelines (2015), Section X.

Key AC roles and training requirements

ACs use assessors and other support staff in various capacities during design, development, implementation and validation. To ensure a consistent and standardised approach to delivering the AC across different participant groups at different times, all people associated with the AC must be appropriately trained:

  1. AC designer/developer – This person is responsible for designing and developing the AC and for ensuring that a logical and systematic process is followed that meets local and international standards for ACs. Adopting a structured process assists in validation research and ensures that the AC is designed in accordance with the intended purpose. The AC designer/developer must also ensure that all AC materials, structured rating forms, assessor guidelines and manuals have been designed to facilitate the consistent implementation of the AC over time. Although it is difficult to specify a minimum academic qualification for the AC designer/developer, the complexity involved in AC design dictates that this person has a proven track record in AC design that meets both local and international AC standards and is a seasoned behavioural analyst. Ideally, this person would have completed a number of different ACs as an assessor, in addition to being coached/mentored by a seasoned AC designer/developer. An additional requirement is that the AC designer/developer needs to keep up to date with the latest developments, trends and research in the AC field. The AC designer/developer must be knowledgeable about the host organisation and country, especially in relation to cultural, legislative, organisational and other relevant contextual factors (Schlebusch & Roodt, 2008).
  2. AC administrator – This person is responsible for supervising and managing the overall AC operation at the highest level. This person may also be the designer/developer of the AC and/or behavioural simulation exercises, may implement and maintain policy documents and may be responsible for conducting research in terms of the validation and evaluation of the AC. The AC administrator is also responsible for managing the assessors and their training, working closely with the host organisation and other decision makers associated with the outcomes of the AC, maintaining data integrity and confidentiality, risk management and quality control. This person should be an experienced behavioural analyst with commensurate experience and qualifications. Ideally, this person would have completed a number of different ACs as an assessor. In the event that the AC administrator is also the AC designer/developer, the criteria described in this category would also apply.
  3. AC coordinator – This person plays an administrative role before, during and after the AC. This person reports to the AC administrator. They are in charge of all operational and logistical matters, for example, scheduling participants, booking venues, liaising with venue staff, ensuring that the AC programme runs according to plan, and managing documentation and other associated duties. This person should be trained in the correct procedures and processes for the AC by the AC administrator and should have excellent planning, organising and administrative skills.
  4. Assessor – This person is trained to observe and record participants’ behaviour across different behavioural simulation exercises that form part of the AC. The assessor then classifies and evaluates the captured evidence against the selected competencies by completing the structured rating forms for each simulation exercise. Assessors need to be properly trained before taking part in an AC. Although there is no minimum academic qualification required to become an assessor in South Africa, if the AC includes other psychological tests as part of the assessment process (e.g. psychometric tests), then a minimum qualification at an Honours level with registration as a Psychometrist (Independent Practice) or working under the supervision of an Industrial/Organisational Psychologist is required. When work-related psychological acts are performed in the AC, this forms part of the scope of practice of Industrial/Organisational Psychologists. If psychologists from other disciplines (e.g. clinical, educational, counselling and research) are trained as assessors, then the regulations set out in the Health Professions Act (Act No. 56 of 1974) apply. If a line manager is designated to be an assessor, then this individual should be an experienced manager with a proven track record of people management in the organisation. In addition to attending assessor training, the line manager should work in conjunction with a seasoned assessor. Assessors should be certified as competent by the AC administrator for each unique AC in which they are involved.
  5. Role player – During interactive simulations, role players create opportunities for the participants to demonstrate behaviour linked to selected competencies being measured in the AC. The role can be played, for example, in a face-to-face setting or over the phone. Role players typically portray a character in a fictitious scenario where there is an element of conflict inherent in the situation. They are responsible for ensuring that they do not overplay or underplay a role, thereby taking away an opportunity from the participant to demonstrate the required behaviour. Role players should be thoroughly trained to understand their own role and the character they will portray. They also need to understand the competencies being evaluated, recognise behaviour linked to these competencies and know how to use prompts appropriately to elicit the desired behaviour from the participant. They must have detailed knowledge and understanding of the content of the behavioural simulation exercise and be consistent when they are performing a role in a role play. The training should include theoretical input and practical exercises. Only after sufficient practice can the role player be signed off by the AC administrator as proficient. Best practice recommends that the role player, as far as possible, should not also be the assessor during the role play exercise. If this is unavoidable, for example, during large-scale ACs assessing hundreds of participants within a specific timeframe, then alternative methods for capturing behaviour, for example, video or audio recordings, should be used.

Validation issues

A major selling point and benefit of using the AC is the established body of empirical research illustrating its strength in predicting successful job performance (Arthur, Day, McNelly, & Edens, 2003; Bowler & Woehr, 2006; Gaugler, Rosenthal, Thornton, & Bentson, 1987; Klimoski & Brickner, 1987; Meriac, Hoffman, Woehr, & Fleischer, 2008). In order to generalise large, published meta-analytic validity summaries to a local context, it is important to ensure that the job, exercises, assessors and participants in the local context are similar to those reported in the meta-analytic study. However, validity generalisation studies of the predictive validity of the OAR do not necessarily establish the validity of the procedure for other purposes, for example, diagnosis of training needs, accurate assessment of level of skill in separate competencies or the developmental influence of participation in an AC. In addition, the majority of these studies are international and it cannot be assumed that the findings will translate in the same manner in South Africa. Nonetheless, international meta-analyses can be used as a benchmark for reference purposes.

Effective scientific evaluation of an AC starts with a clear articulation of the objectives of the AC. This aids in the production of empirical evidence for the validity of the AC in order to determine whether the AC measures what it intends to measure. In evaluating the validity of AC ratings, it is particularly important to document the selection of the competencies measured in the AC. In addition, the relationship of AC exercises (e.g. behavioural simulation exercises and/or psychometric tests used as part of the AC) to the competencies measured should be documented.

Validity is defined as the extent to which a measurement tool or process, such as an AC, yields useful results and indicates to what extent meaningful inferences can be made about AC ratings. Multiple types of validity evidence can be accumulated (e.g. convergent, discriminant, content, criterion-related, face and predictive validity) depending on the questions being asked and the tools or processes being investigated (see Bowler & Woehr, 2006; Gaugler et al., 1987; Lance, Lambert, Gewin, Lievens, & Conway, 2004; Lievens & Conway, 2001). For example, face validity refers to a process or exercise that is constructed to outwardly appear relevant to the context/target job role. In contrast, criterion-related validity is used when an OAR is related to later or concurrent job performance or progress. The more contemporary unitary view of validity states that the different types of validity are complementary to each other and work in conjunction to provide meaningful evidence of construct validity. A great deal of international research suggests that AC validity evidence is stable across a wide range of jobs, over long time periods and in various national/cultural contexts (Thornton et al., 2015).
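
As a purely illustrative sketch of a basic criterion-related validation step, the Python snippet below correlates hypothetical OARs with later job performance ratings; a real study would require an adequate sample size, attention to range restriction and appropriate statistical corrections.

    # Hypothetical data: OARs from the AC and supervisory job performance
    # ratings collected later for the same participants (illustrative only).
    oar = [3.4, 2.8, 4.1, 3.0, 3.7, 2.5, 4.4, 3.2]
    job_performance = [3.6, 2.9, 4.0, 3.3, 3.5, 2.4, 4.2, 3.0]

    def pearson(x, y):
        # Basic Pearson correlation, used here as a criterion-related
        # validity coefficient for the OAR.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (sd_x * sd_y)

    print(round(pearson(oar, job_performance), 2))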

Despite historical arguments in the research literature pertaining to the internal structure of AC ratings (Kuncel & Sackett, 2013; Lance, Woehr, & Meade, 2007; Sackett & Dreher, 1982), recent studies have demonstrated that much of this disagreement stems from the application of differing methodological perspectives (e.g. generalisability theory, confirmatory factor analysis, bi-factor models, hierarchical factor analytic models, item response theory) and research designs (e.g. task-based, dimension-based or mixed model perspectives) (Hoffman, 2012; Putka & Hoffman, 2013). Judgements of the validity of AC ratings should be based on the overall trend of the evidence of various techniques and should ideally not be based on a single approach. In addition, it is important to specify the unit of analysis in validation studies (e.g. behavioural indicator level, final dimension ratings or OARs). The level of aggregation of the data may influence the quality and outcome of validation studies.

Establishing the validity of an AC programme is a complex and technical process. It is important that validation research meets both professional and legal guidelines such as those set out by the Guidelines for the Validation and Use of Assessment Procedures in the Workplace (SIOPSA, 2005). International Guidelines can also be referenced such as the Principles for the Validation and Use of Personnel Selection Procedures (SIOP, 2003) and the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014). Research should be conducted by individuals knowledgeable in the technical and legal issues related to validation procedures. Technical standards and principles for validation should be obtained from reliable and relevant academic sources, such as textbooks on psychological assessment and statistical procedures. Several approaches can be used to gather evidence in support of the validity of the adapted AC.

Those responsible for evaluating and validating ACs should apply the following minimum standards:

  1. Procedures should be implemented in order to ensure the efficient and accurate gathering of data;
  2. Evaluation should be rigorous and scientific and include qualitative content analysis, statistical analysis and participant/assessor attitude surveys and
  3. Empirical validation studies should be conducted when there are adequate resources and large enough samples from which meaningful results can be extracted.

Ideally, evidence from local validation studies may serve as a useful reference and starting point. In situations where classic validation techniques are not feasible, a genuine effort must be made to collect alternate validation evidence. These attempts should be directed at demonstrating the relevance and validity of the assessment process and outcomes across cultural contexts.

Alternate approaches can include, but are not restricted to, the following:

  1. Collection of content-related validity (i.e. job relatedness) evidence;
  2. Review of job performance evidence (e.g. collected through on-the-job observation, interviews with line managers or performance appraisal data) and
  3. Interviews with relevant stakeholders and participants to gain insight into the validity and effectiveness of the AC.

As a general rule, it is suggested that the minimum requirement for ACs entails the establishment of content validity, face validity and construct validity evidence.

Technology

Technology can give organisations strategic leverage and the move towards a technologically integrated society is inevitable. The benefits of technology can be seen in the use of psychometric tests, which have moved from paper-based administration to electronic and Internet-based administration. It therefore stands to reason that using technology as part of the AC could be an efficient way of lowering costs and enhancing participants’ experience of the process. Technology can be incorporated into the AC in, for example, the delivery of instructions and stimuli in the form of audio or visual cues, presenting the behavioural simulation exercises’ content in electronic format or delivering the AC over the Internet. This is referred to as a technology-enhanced AC (TEAC). This design can be extended to virtual applications, where the AC is delivered in one physical location but the assessors are based in another physical location. This is known as a virtual assessment centre.

However, using technology in the delivery of the AC introduces a number of ethical and legal challenges. Three critical issues are highlighted in the South African context. Firstly, the level of computer literacy of the participant group must be considered. In developing countries such as South Africa, large portions of the population do not have access to technology and are not proficient in the use of technology. Secondly, there are laws governing how participant information is accessed, stored, used and transmitted, both locally and across borders, when using technology. Thirdly, although technology is a useful aid in ACs, it must be remembered that, in developing countries like South Africa, the technological infrastructure can be unstable. This may be because of Internet connectivity and bandwidth challenges or interruptions in the supply of electricity. Organisations using TEACs therefore have to make contingency plans for the instability involved with using technology. There are also additional considerations pertaining to standardisation of the assessment process. Theoretically, each different group of participants being assessed for the same purpose/job/managerial context should receive the same experience in terms of instructions, timing and assessment conditions (e.g. facilities, technology). This might not be possible or practical in developing countries. Another consideration is the relevance of the technological method for measuring the inherent requirements of the job. Therefore, it is important to ensure that incorporating technology into the AC does not detract from the fidelity of the AC process and its intended purpose. Furthermore, the issue of data security must be considered and a risk assessment should be conducted to determine the feasibility of introducing technology into the AC. A process for confirming the participants’ identity should be included if the TEAC is used for selection purposes. Finally, a TEAC must still comply with the essential elements presented in Section 3 in order to be considered an AC.

When technology is used as part of the AC, two additional features must be addressed. Firstly, the AC administrator and/or AC designer/developer must work in close collaboration with software developers to ensure that the AC works in the manner in which it was intended to work. This is to ensure that the TEAC measures the intended behaviour without introducing irrelevant variance into the process, for example, because of the participant’s limitations or technological restrictions. Secondly, participants going through a TEAC must have sufficient opportunities to practice navigating the system and completing example simulation exercises before attending the actual AC. This is to ensure that each participant feels adequately equipped to deal with the technological requirements of the AC so that they can perform optimally. However, it is also important to note that if the daily use of computers is an inherent part of the target role, it will be vital to give participants a realistic simulation and access to typical business software such as emails. Insisting on paper-and-pencil media could introduce distortions in the expected real-world behaviour of participants.

Refer to the International Taskforce on Assessment Centre Guidelines (2015), Section X on Technology, as well as Thornton et al. (2015), Chapter 9, for additional details regarding hardware and software requirements. The International Test Commission Guidelines (2005) on Computer-based and Internet-delivered Testing also provide a valuable reference for best practice.

Legal compliance

Reference has already been made to the legislative environment in South Africa and additional considerations are presented here in response to changes in labour legislation. Section 8 of the Employment Equity Act (Act No. 55 of 1998) was promulgated to protect employees from unfair discrimination in the workplace, including unfair discrimination relating to the use of psychological tests as part of decision-making processes. Psychological tests measure psychological constructs such as personality traits, cognition or emotional intelligence. ACs therefore differ from psychological tests as they are not tests but methods or procedures that use work-related simulations to assess work-related skills, competencies and behaviours displayed as observable actions.

Section 8 of the Employment Equity Act (Act No. 55 of 1998) was amended by the Employment Equity Amendment Act (No. 47 of 2013), Section 8, clause d, paragraph 4, as follows:

Psychological testing and other similar assessments of an employee are prohibited unless the test or assessment being used:

  1. has been scientifically shown to be valid and reliable;
  2. can be applied fairly to all employees;
  3. is not biased against any employee or group and
  4. has been certified by the HPCSA established by Section 2 of the Health Professions Act (Act No. 56 of 1974), or any other body which may be authorised by law to certify those tests or assessments.

The introduction of clause d has caused widespread confusion in organisations that use ACs because it is unclear whether ACs are classified as psychometric or psychological tests by the clause.

The AC is not a test but rather a method of assessment that focuses on work-related behavioural observation and consists of a number of steps completed in a sequential manner. As such, it is not a psychological test. ACs use competencies, skills and work-related behavioural constructs, rather than psychological constructs, that emanate from the job analysis and the study of work-related constructs. These work-related tasks and behaviours form the foundation of the AC method. Therefore, the AC method is not considered a purely psychological test.

However, if psychological constructs are measured as part of the AC by means of, for example, personality assessment, then the psychometric test used to measure personality must conform to the amended requirement for certification with the HPCSA. In these instances, the use of these assessments is reserved for psychologists and the HPCSA Scope of Practice criteria apply. According to the Health Professions Act (Act No. 56 of 1974) only registered psychologists are permitted to perform psychological acts which, in relation to evaluation, testing, and assessment, are defined in and elaborated on in Section 37 (2) (a), (b), (c), (d), and (e). These psychological acts relate to psychometric measuring devices, tests, questionnaires, techniques or instruments that assess intellectual or cognitive ability or functioning, aptitude, interest, personality make-up or personality functioning.

As a general rule, when ACs are used as a selection device, this automatically falls under the auspices of the Employment Equity Act (Act No. 55 of 1998) insofar as the AC must be valid and reliable, must be applied fairly to different groups and must measure inherent requirements of the job. To this end, AC designers/developers should rely on job analysis and collated job-related information to create a documented evidence portfolio of the job analysis process to inform validation. This important step enables AC designers/developers to ascertain the elements in the AC that are behavioural or psychological. However, in general, because the AC method measures behaviour by means of work-related behavioural simulation exercises rather than measuring a psychological construct, the additional requirement for certification of behavioural simulation exercises with the HPCSA as specified in the Employment Equity Amendment Act (Act No. 47 of 2013) is not relevant.

Cross-cultural considerations

Factors such as the widespread use of ACs around the world, the cross-cultural application of ACs, the globalisation of business, the need for global executives and the establishment of consultancies offering AC services in numerous countries have raised questions concerning the application of AC practices in diverse settings. Many challenging issues regarding the design and implementation of ACs arise when they are used in cross-cultural situations (Lievens & Thornton, 2005). The emergence of global business in South Africa has contributed to the situation where, for example, an existing AC method is transported from an organisation in the United Kingdom to its counterpart in South Africa, or where a successful AC is imported from the United States or Europe and implemented in an organisation in South Africa.

When designing ACs in a cross-cultural context, two approaches can be considered, namely, the etic and the emic approach. The etic approach assumes that (1) there are universal individual attributes relevant to organisational effectiveness; (2) pre-existing assessment techniques can be adapted in different countries; (3) standardisation and validity extensions require that a fixed set of competencies and procedures be used; and (4) the adoption of uniform selection procedures across cultures contributes to a homogeneous organisational culture. The emic approach assumes that (1) generic assessment methods will be invalid (e.g. they under-specify unique aspects of criterion performance); (2) each culture must be studied to identify its unique features; (3) the acceptance of various assessment techniques will vary across cultures; and (4) assessors’ training must include an appreciation of contextual information. Before a specific approach can be chosen to guide the design of the AC in a cross-cultural setting, various contextual factors need to be considered. These include, for example, the main business language, the complexity of the work environment, and organisational culture and values.

The International Taskforce on Assessment Centre Guidelines (2015), Section XII, prescribes additional contextual factors to be taken into consideration:

  1. When developing ACs for cross-cultural application, the assumption cannot be made that the purpose, design and content of a pre-existing AC method is transferable across cultures or countries.
  2. To ensure the validity of the AC method for all cultures involved, a determination must be made as to whether an AC method developed for one culture can be applied equivalently to another culture.
  3. A range of contextual factors will help determine whether the AC methods can be adopted uniformly with minimal changes or whether the AC will need to be customised (to varying extents) to suit the needs of the new country.
  4. Evidence in support of the equivalence of the AC method across cultures must be documented.
  5. The AC administrator should assist in updating information regarding local country norms, reliability and/or validity of the AC by providing information to international or local developers, publishers and researchers.
  6. It is important to note that, over time, amendments to local, professional and legal standards are customary. These amendments should be documented and any resulting changes to the AC should be formally noted.

Thus, although there are universal similarities, there are also cultural differences that are specific to a country and organisation. Adapting the AC to the local context is therefore important, and face validity is critical when linking the AC to that context.

Additional considerations in cross-cultural settings:

  1. Behavioural simulation exercises should reflect local place names, people’s names, prices and distance indicators.
  2. The impact of a participant’s cultural background on performance in the AC must be taken into account.
  3. Where practical and possible, the assessor group should be diverse and culturally representative (refer to Section 3, Assessors, for further details).
  4. Assessors should receive diversity-awareness training to help moderate and control for bias (refer to Appendix 2, Assessor Training Checklist, for a full list of training components).
  5. Structured rating forms should be designed to properly account for organisational culture and context, for example, the differences that exist in decision making in private versus public sector organisations, where public sector organisations tend to be more bureaucratic regarding decision making than private organisations.
  6. The process of AC design should include multi-cultural representation. For example, a range of organisational stakeholders can be engaged to provide relevant context and cultural evidence in the design of behavioural simulation exercises. Similarly, subject matter experts can be engaged for input regarding the design of behavioural indicators and behaviourally anchored rating scales (an illustrative sketch follows after this list).

However, these specific cross-cultural considerations must not be at the expense of the measurement of the essential competencies required for the focal job.
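Item 6 above refers to behavioural indicators and behaviourally anchored rating scales. Purely as an assumption about how such a structured rating form might be captured, the sketch below represents a single competency with hypothetical anchors; in practice the competencies, indicators and anchors would be derived from the job analysis and reviewed by subject matter experts for cultural and organisational fit.

```python
# Minimal sketch (hypothetical content): one way a structured rating form
# with behaviourally anchored rating scales could be represented so that
# assessors rate observable behaviours consistently across exercises.
# The competency, exercise and anchors below are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AnchoredScale:
    competency: str
    exercise: str
    # Behavioural anchors keyed by rating level (1 = low, 5 = high).
    anchors: dict[int, str] = field(default_factory=dict)

    def describe(self, rating: int) -> str:
        """Return the behavioural anchor that corresponds to a rating."""
        return self.anchors.get(rating, "No anchor defined for this rating.")

decision_making = AnchoredScale(
    competency="Decision making",
    exercise="In-basket simulation",
    anchors={
        1: "Defers or avoids decisions; provides no rationale.",
        3: "Makes routine decisions; rationale covers obvious factors only.",
        5: "Makes timely decisions; weighs stakeholder, cost and risk factors.",
    },
)

print(decision_making.describe(3))
```

Capturing the form in a structured way like this makes it easier to keep anchors consistent across exercises and assessors, and to adapt wording to the local context without altering the underlying competency model.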

Ethics

When using ACs for selection and development, the rights of participants and responsibilities of decision makers need to be confirmed. Reference has been made to some of these considerations throughout this document (e.g. the training and qualification requirements of various stakeholders involved in the AC, purpose of the AC, professional standards, cross-cultural considerations and legal compliance issues). Ethical considerations also need to form part of the AC policy document to specifically accommodate ethical standards and guidelines. Additional ethical considerations are discussed in this section:

  1. Informed consent – Participants need to know the purpose of the AC and how the data will be gathered, scored and used. The manner in which the data will be used and stored and who will have access to that data must be communicated in writing to the participants before the AC. The participants should also have the opportunity to agree that their data may be used for the stated purposes. Participants should also be encouraged to disclose anything that they feel could impact their performance in the AC, for example, specific medications, state of mind or disability.
  2. Participant rights – These include receiving feedback, being informed about what data will be gathered during the AC and how the data will be gathered (e.g. observation, audio or visual recordings), procedure for re-assessment and consequences of non-participation in the AC. Written permission must also be obtained from the participants if the AC data are to be used for different purposes. Use of assessments in South Africa has often been viewed with scepticism. Therefore, to foster transparency and integrity of the AC process, it may be useful to provide participants with the list of competencies to be measured in the AC as well as which specific competencies will be measured in each behavioural simulation exercise.
  3. Re-assessment – Because participants may gain a measure of ‘test wiseness’ with repeated exposure to the AC method, and to allow participants the opportunity to develop their competencies, an appropriate amount of time should pass before re-assessment takes place (refer to Section 6, AC policy). Re-assessment within a short span of time could be disadvantageous to participants.
  4. Dealing with disabilities – Participants with disabilities must be dealt with on a case-by-case basis. The guiding principle should be to refer to inherent requirements of the job. However, organisations are obligated to make reasonable accommodation for people with disabilities. Therefore, as far as possible and where practical, the AC should be tailored to accommodate participants with disabilities. In these instances, current South African legislation should guide the procedure.
  5. Copyright – AC materials are often subject to copyright. Therefore, users of AC materials need to respect these laws by not photocopying materials without permission. Copyright violations also include plagiarising or copying ideas, such as making superficial changes to existing materials and passing them off as a new AC.
  6. AC integrity – AC materials should not be randomly distributed and shared with unauthorised people and should be kept confidential to protect the integrity of the AC.
  7. Portraying an AC as delivering results that it was not designed to deliver – AC designers/developers, AC administrators, assessors, AC coordinators and AC practitioners should take care to only portray what the AC delivers in reality. For example, claims should not be made that an AC determines potential if scientific evidence to that effect does not exist. In addition, claims should not state that an AC can be customised to an organisation’s needs without the AC’s reliability and validity being affected (although this could be stated once it has been verified empirically). It should also not be claimed that an AC has been validated for a certain population (or that the validation can be generalised to a specific population) if such a statement cannot be defended and/or backed up with evidence. Finally, claims should not suggest that low reliabilities and validities are acceptable when in reality they are too low to provide any utility for making personnel-related decisions (especially for ACs used for selection purposes).
  8. Using AC results for purposes other than the intended purpose – ACs are designed for a specific purpose. To this end, AC results gathered for development purposes cannot subsequently be used to make decisions that have consequences for the affected participants, for example, for selection or retrenchments, without collecting appropriate validation evidence supporting the use of the AC for this new purpose.
  9. Using one AC across different contexts – It is not best practice to use the same AC (consisting of a group of behavioural simulation exercises) for different levels of job complexity (e.g. supervisors and middle managers) or for different purposes (e.g. selection versus development). For generic jobs, the same AC can be used, but for different jobs and managerial contexts, the AC must be adapted accordingly. Similarly, the same simulations should not be used for every intervention in the host organisation. Therefore, every effort must be made to tailor the AC for its intended purpose and AC designers/developers should be guided by the information obtained during job analysis. For example, if an AC has been designed specifically for selection, it cannot be transported in its current form to a DAC. Similarly, if an AC has been developed for a particular industry or organisation, it cannot necessarily be transported in its current form to a different industry or organisation without proper review and alignment.
  10. Repeated exposure – Participants should not complete the same AC within a 12-month period. Similarly, if an assessor becomes a participant in the host organisation, they should complete a different AC.
  11. Assessors who know participants – In the interest of fairness and objectivity, assessors and line managers must not observe participants where there is a past, present or future relationship that could cause a conflict of interest.
  12. Compromising professional conduct – AC practitioners should not compromise professional conduct in order to satisfy organisational demands. Examples of such conduct would include modifying AC results to support a decision already taken, using results from a DAC to make a selection decision, conducting ACs differently than indicated in the AC policy document, AC designers/developers not following scientific rigour and taking short cuts in the AC design process.
  13. Social responsibility – ACs can be used beneficially as a development tool to identify development gaps and to highlight appropriate interventions. This is most suitable for ACs that carry no consequences for participants and are not used for high-stakes assessment.

Acknowledgements

The ACSG wish to gratefully acknowledge the use of the sixth edition Guidelines and Ethical Considerations for Assessment Centre Operations (International Taskforce on Assessment Centre Guidelines, 2015), other countries’ specific AC Guidelines⁴ and related documents⁵ that served as the foundation for the compilation and revision of the fifth edition Best Practice Guidelines for the use of the Assessment Centre Method in South Africa. We also wish to thank Deborah E. Rupp⁶ (Chair of the International Taskforce on Assessment Centre Guidelines) and George C. Thornton, III, our international advisors, for their valuable inputs and review of the fifth edition Guidelines. Lastly, we wish to thank Filip Lievens for his valuable contributions during the taskforce workshop.

Competing interests

The authors declare that they have no financial or personal relationships which may have inappropriately influenced them in writing this article.

Authors’ contributions

D.M. (University of Pretoria) and A.B. (Precision HR, University of Pretoria) contributed equally to the writing of this article.

References

AERA, APA, & NCME (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education). (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Arthur, W.J., Day, E.A., McNelly, T.L., & Edens, P.S. (2003). A meta-analysis of the criterion-related validity of assessment center dimensions. Personnel Psychology, 56, 125–154.

Ballantyne, I., & Povah, N. (2004). Assessment and development centres (2nd edn.). Aldershot, England: Gower.

Bowler, M.C., & Woehr, D.J. (2006). A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings. Journal of Applied Psychology, 91(5), 1114–1124. http://dx.doi.org/10.1037/0021-9010.91.5.1114

Employment Equity Act. (1998). Act No. 55 of 1998. Republic of South Africa.

Employment Equity Amendment Act. (2013). Act No. 47 of 2013. Republic of South Africa.

Gaugler, B.B., Rosenthal, D.B., Thornton, G.C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493–511.

Gaugler, B.B., & Thornton, G.C. (1989). Number of assessment center dimensions as a determinant of assessor accuracy. Journal of Applied Psychology, 74(4), 611–618. http://dx.doi.org/10.1037/0021-9010.74.4.611

Health Professions Act. (1974). Act No. 56 of 1974. Republic of South Africa.

Hoffman, B.J. (2012). Exercises, dimensions and the Battle of Lilliput: Evidence for a mixed-model interpretation of assessment center performance. In D.J. Jackson, C.E. Lance, & B.J. Hoffman (Eds.), The psychology of assessment centers (pp. 281–306). New York: Routledge.

International Taskforce on Assessment Center Guidelines. (2015). Guidelines and ethical considerations for assessment center operations. Journal of Management, 41, 1244–1273. http://dx.doi.org/10.1177/0149206314567780

International Test Commission. (2005). ITC guidelines on computer-based and internet delivered testing. Retrieved from http://www.intestcom.org

Klimoski, R., & Brickner, M. (1987). Why do assessment centers work? The puzzle of assessment center validity. Personnel Psychology, 40(2), 243–260.

Krause, D.E., Rossberger, R.J., Dowdeswell, K., Venter, N., & Joubert, T. (2011). Assessment center practices in South Africa. International Journal of Selection and Assessment, 19(3), 262–275. http://dx.doi.org/10.1111/j.1468-2389.2011.00555.x

Kuncel, N.R., & Sackett, P.R. (2013). Resolving the assessment center construct validity problem (as we know it). Journal of Applied Psychology, 99(1), 38–47. http://dx.doi.org/10.1037/a0034147

Lance, C.E., Lambert, T.A., Gewin, A.G., Lievens, F., & Conway, J.M. (2004). Revised estimates of dimension and exercise variance components in assessment center postexercise dimension ratings. Journal of Applied Psychology, 89, 377–385. http://dx.doi.org/10.1037/0021-9010.89.2.377

Lance, C.E., Woehr, D.J., & Meade, A.W. (2007). Case study: A Monte Carlo investigation of assessment center construct validity models. Organizational Research Methods, 10, 430–448. http://dx.doi.org/10.1177/1094428106289395

Lievens, F., & Conway, J.M. (2001). Dimension and exercise variance in assessment center scores: A large-scale evaluation of multitrait-multimethod studies. Journal of Applied Psychology, 86(6), 1202–1222. http://dx.doi.org/10.1037/0021-9010.86.6.1202

Lievens, F., Schollaert, E., & Keen, G. (2014). The interplay of elicitation and evaluation of trait expressive behavior: Evidence in assessment center exercises. Journal of Applied Psychology, 100(4), 1169–1188. http://dx.doi.org/10.1037/apl0000004

Lievens, F., & Thornton, G.C. (2005). Assessment centers: Recent developments in practice and research. In A. Evers, O. Smit-Voskuijl, & N. Anderson (Eds.), Handbook of selection (pp. 243–264). Malden, MA: Blackwell.

Meriac, J.P., Hoffman, B.J., Woehr, D.J., & Fleischer, M.S. (2008). Further evidence for the validity of assessment center dimensions: A meta-analysis of incremental criterion-related validity of dimension ratings. Journal of Applied Psychology, 93(5), 1042–1052. http://dx.doi.org/10.1037/0021-9010.93.5.1042

Oliver, T., Hausdorf, P., Lievens, F., & Conlon, P. (2014). Interpersonal dynamics in assessment center exercises: Effects of role player portrayed disposition. Journal of Management, 1–26. http://dx.doi.org/10.1177/0149206314525207

Promotion of Access to Information Act. (2000). Act No. 2 of 2000. Republic of South Africa.

Protection of Personal Information Act. (2013). Act No. 4 of 2013. Republic of South Africa.

Putka, D.J., & Hoffman, B.J. (2013). Clarifying the contribution of assessee-, dimension-, exercise-, and assessor-related effects to reliable and unreliable variance in assessment center ratings. Journal of Applied Psychology, 98(1), 114–133. http://dx.doi.org/10.1037/a0030887

Rupp, D.E., Snyder, L.A., Gibbons, A.M., & Thornton, G.C., III. (2006). What should developmental assessment centers be developing? Psychologist-Manager Journal, 9, 75–98.

Sackett, P.R., & Dreher, G.F. (1982). Constructs and assessment center dimensions: Some troubling empirical findings. Journal of Applied Psychology, 67(4), 401–410. http://dx.doi.org/10.1037/0021-9010.67.4.401

Schlebusch, S., & Roodt, G. (2008). Assessment centres: Unlocking potential for growth. Johannesburg, South Africa: Knowres Publishing.

SIOP (Society for Industrial and Organizational Psychology). (2003). Principles for the validation and use of personnel selection procedures (4th edn.). Bowling Green, OH: Society for Industrial and Organizational Psychology.

SIOPSA (Society for Industrial and Organisational Psychology of South Africa). (2005). Guidelines for the validation and use of assessment procedures in the workplace. Johannesburg, South Africa: Society for Industrial and Organisational Psychology of South Africa.

Spangenberg, H.H. (1987). Takseersentrums, etiese oorwegings en die IPB. IPB-Mannekragjoernaal, 6(4), 12–18.

Tett, R.P., & Burnett, D.D. (2003). A personality trait-based interactionist model of job performance. Journal of Applied Psychology, 88, 500–517. http://dx.doi.org/10.1037/0021-9010.88.3.500

Thornton, G.C., Rupp, D.E., & Hoffman, B. (2015). Assessment center perspectives for talent management strategies. New York: Routledge.

Appendix 1


Table A1-1: Past Taskforce Members.

Appendix 2


BOX A2-1: Assessor Training Checklist.

Footnotes

1. South Africa uses the term ‘competencies’ to represent a person’s range of knowledge, skills, abilities and other attributes that can be defined behaviourally and therefore observed during, for example, the AC. The International Taskforce on Assessment Centre Guidelines, 2015, use ‘dimensions’ as the common term but have now amended this label to read ‘behavioural constructs’. Note that the South African AC guidelines consider these labels to be synonymous.

2. Contemporary research and advances in AC application have seen ACs evolving to include, and focus on, other elements such as tasks and roles as meaningful units of behavioural information. In these cases, it might be useful to refer to the focal unit of measurement as a ‘behavioural construct’ instead of ‘competency’.

3. Meta-analysis (also known as a validity generalisation study) is a statistical aggregation of multiple local validation studies.

4. For further information, refer to the International Taskforce on Assessment Centre Guidelines, 2015, Section XIII.

5. For further information, refer to the International Taskforce on Assessment Centre Guidelines, 2015, Appendix 1 and Appendix 2.

6. Permission to incorporate and cite content from the International Taskforce on Assessment Centre Guidelines, 2015, was given by SAGE/JOM.


 
