About the Author(s)

Llewellyn E. van Zyl
Department of Human Performance Management, Faculty of Industrial Engineering and Innovation Sciences, University of Eindhoven, Eindhoven, The Netherlands

Optentia Research Focus Area, Faculty of Economic and Management Sciences, North-West University, Vanderbijlpark, South Africa

Department of Human Resource Management, University of Twente, The Netherlands

Institut für Psychologie, Goethe University, Frankfurt am Main, Germany

Nina M. Junker
Department of Social Psychology, Faculty of Psychology and Sports Science, Goethe University Frankfurt, Frankfurt, Germany


Van Zyl, L.E., & Junker, N.M. (2019). Debating the scientific credibility of industrial and organisational psychology: A rebuttal. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 45(0), a1766. https://doi.org/10.4102/sajip.v45i0.1766


Debating the scientific credibility of industrial and organisational psychology: A rebuttal

Llewellyn E. van Zyl, Nina M. Junker

Received: 24 Nov. 2019; Accepted: 26 Nov. 2019; Published: 12 Dec. 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Problematisation: The credibility and transparency of industrial and organisational psychological (IOP) research within South Africa were recently challenged by Efendic and Van Zyl (2019). The authors briefly showed that there were inconsistencies in the statistical results reported by authors in the South African Journal of Industrial Psychology (SAJIP), that various studies were insufficiently powered, that best-practice guidelines for the reporting of results were mostly only partially followed and that no transparency exists with regard to the research process. They demonstrated that authors of the SAJIP may knowingly or unknowingly be engaging in questionable research practices, which directly affects the credibility of both the discipline and the journal. Furthermore, they suggested practical guidelines for both authors and the SAJIP on how this could be managed.

Implications: Based on these suggestions, the authors invited prominent members of the IOP scientific community to provide scholarly commentary on their paper in order to aid in the development of ‘a clear strategy on how [the confidence crisis in IOP] could be managed, what the role of SAJIP is in this process and how SAJIP and its contributors could proactively engage to address these issues’. Seven members of the editorial board and two international scholars provided commentaries in an attempt to further the debate about the nature, causes, consequences and management of the credibility crisis within the South African context.

Purpose: The purpose of this final rebuttal article was to summarise and critically reflect on the commentaries of the nine articles to advance the debate on the confidence crisis within the South African IOP discipline.

Recommendations: All SAJIP’s stakeholders (authors, editors, reviewers, the publication house, universities and the journal) can play an active role in enhancing the credibility of the discipline. It is suggested that SAJIP should develop a clear and structured strategy to promote credible, transparent and ethical research practices within South Africa.

Keywords: Open science; Replication; Reproducibility; Industrial psychology; Organisational psychology; Academic publishing.


For 45 years, the South African Journal of Industrial Psychology (SAJIP) has been at the vanguard of the struggle to advance industrial and organisational psychology (IOP) as a discipline within South Africa (Coetzee & Van Zyl, 2013, 2014; Raubenheimer, 1994). Since its inception in 1974, under the editorships of Profs. I. van W. Raubenheimer, G. Roodt, M. Coetzee and L.E. van Zyl, the SAJIP has acted as a custodian for demarcating the professional practice and research domains of IOP within South Africa. The South African Journal of Industrial Psychology has continuously acted as a channel not only to disseminate contemporary knowledge, facilitate the continuous professional development of psychologists and advance science, but also to foster communities of practice and to set the standards of research quality for IOP in Africa. As the oldest and most prestigious psychology journal in Africa, it has been a beacon of scientific integrity and advancement by playing a lead role in setting publication standards, developing future-fit editorial processes and policies, building researcher and reviewer capacity, and championing the belief that science should be open, accessible and free (Coetzee & Van Zyl, 2014).

With the dawn of its 45th edition, Efendic and Van Zyl (2019) presented suggestions on how SAJIP could further advance the discipline and enhance the quality of its manuscripts by advocating the adoption of various open science practices, principles, processes and procedures. In their opinion paper, the current replication crisis within psychology was discussed, and arguments were presented as to how this crisis affects IOP. These authors highlighted various systemic causes contributing to the lack of confidence in IOP, ranging from the way in which authors conduct research and process or report results, through to how journals, editors and reviewers contribute to these problems through various editorial processes, policies and practices. Efendic and Van Zyl (2019) briefly demonstrated that these issues are also apparent within SAJIP, highlighting inconsistencies in statistical results reported by authors, that various studies were insufficiently powered, that best-practice guidelines for the reporting of results were mostly only partially followed and that no transparency exists with regard to the research process. In effect, they highlighted that SAJIP’s authorship knowingly or unknowingly engages in what Nosek et al. (2015:1422) call ‘questionable research practices’, and that the journal is not immune to the issues contributing to the confidence crisis in our discipline. But how can that be managed?

Efendic and Van Zyl (2019) argued that both authors and the journal play an important role in enhancing the credibility of the IOP discipline. They provided detailed descriptions of approaches, research practices and techniques authors could employ to enhance the transparency and quality of their manuscripts. These recommendations range from managing statistical issues, reducing analytical flexibility and systematic biases, and employing best-practice guidelines for conducting and reporting statistics, to advocating for multi-institutional collaboration.

Furthermore, Efendic and Van Zyl (2019) made various suggestions as to how the journal itself could create a culture that celebrates open research practices, facilitates transparency and enhances the credibility of the discipline. Firstly, they suggested that SAJIP should join the legion of more than 1500 high-impact journals that have already adopted the Transparency and Openness Promotion (TOP) guidelines1 (Nosek et al., 2015) to promote open science and transparent research practices. Secondly, they suggested the incentivisation of authors who engage in open science practices. Thirdly, the use of registered reports was encouraged, whereby authors publish their initial idea and research design (which is subjected to peer review) prior to conducting their research. Data are collected and analysed after the acceptance of the pre-registered report. The final manuscript is then subjected to a second round of peer review, which finally leads to publication – irrespective of whether the hypotheses were supported or not. Fourthly, the development of best-practice guidelines and publication standards for statistical methods was suggested. Finally, an open, collaborative peer-review process was advocated.

Based on these suggestions, the authors invited prominent members of the IOP scientific community to provide scholarly commentary on their paper in order to aid in the development of (Efendic & Van Zyl, 2019):

[A] clear strategy on how [the confidence crisis in IOP] could be managed, what the role of SAJIP is in this process and how SAJIP and its contributors could proactively engage to address these issues. (p. 2)

Seven members of the SAJIP editorial board and two international scholars responded to the call, provided detailed commentaries and made additional suggestions. These commentaries stretched far beyond mere reflections on open science practices; they focused on how the impact, nature and credibility of both SAJIP and the discipline can be promoted within the South African context.

Therefore, the purpose of this final rebuttal article is to summarise and critically reflect on the commentaries of the nine papers in order to lay the foundation for the development of a clear and structured strategy for SAJIP to promote credible, transparent and ethical research practices within South Africa. In the following, we first provide a general overview of the responses to Efendic and Van Zyl (2019) before summarising each commentary and providing a critical response.

Cautions, commentaries and criticisms: A summary and response

All nine individual papers submitted in response to Efendic and Van Zyl (2019) extended the debate as to the contributing factors of the credibility crisis and offered further practical suggestions on how this crisis could be managed within the South African context. Despite some differences in views, all the contributing authors unequivocally argued in favour of advancing the IOP discipline’s scientific integrity. They advocated for reforms not only in how research is conducted by researchers and managed by SAJIP, but also, more fundamentally, for a change in IOP researchers’ world view and perception of psychological science. The authors also made additional suggestions as to how academic institutions, government and policy could contribute to enhancing scientific credibility and transparency within the IOP discipline.

Each paper approached the credibility issue from a different perspective, which extended the debate into areas ranging from the philosophical to the pragmatic. Given the fundamentally different approaches and views, individual responses to the commentaries are presented: a summary of each author’s main arguments is provided, followed by a critical response.

Article 1. The replicability crisis as chance for psychological research and SAJIP.

Summary of commentary

In the first commentary paper, Hernandez-Bark (2019) argues that the replication crisis in psychology presents not only a threat, but also an opportunity for researchers and SAJIP to advance how science is viewed and approached. She argued that debates about the scientific credibility of IOP research only recently started to gain momentum (cf. Köhler & Cortina, 2019), as opposed to other domains, such as social psychology, where the debate started in 2011 because of the academic fraud committed by Diederik Stapel (Abma, 2013). Her argument centres on the opportunity to reform the discipline whilst the proverbial iron is still hot, by implementing strategies already developed by other sub-disciplines in psychology, such as open data and pre-registration. Despite her unequivocal support for the implementation of the suggestions by Efendic and Van Zyl (2019), she posits that these authors neglected to address aspects pertaining to access to information and fairness. Firstly, she argues that authors should be afforded the opportunity to access state-of-the-art literature and that open access publication should remain a focal point for SAJIP. However, she notes that the page fees of open access journals such as SAJIP are exorbitantly high and implies that such fees may be a stumbling block for the advancement of science. From this perspective, SAJIP should look for ways to reduce page fees, and Hernandez-Bark makes a number of suggestions as to how this could be approached. Secondly, she argues in support of enhancing fairness and equity within SAJIP. She highlights that despite gender equality being prevalent in the authorship of manuscripts, racial diversity remains a significant challenge. She further highlights that most contributing authors are South African, with limited contributions emanating from international academics. This hampers the international reputation and visibility of the journal, which may have negative consequences for being listed in prominent article indexes such as the Thomson Reuters Index (ISI). She further proposes a number of potential initiatives to manage these concerns.

Response to commentary
It is an established mantra within the positive psychological paradigm that traumatic experiences provide opportunities for personal growth and development (Jorgensen-Graupner & Van Zyl, 2019; Peterson, Park, Pole, D’Andrea, & Seligman, 2008). Extreme adversity, which impedes the adaptive resources of an individual or a group, has the potential to challenge beliefs about reality, which could lead to the development of a higher level of functioning (Manning, De Terte, & Stephens, 2015). From Peterson et al.’s (2008) perspective, growth occurs as a result of a struggle with the ‘new reality’ which develops in the wake of a traumatic experience, rather than directly from the trauma itself. Given that an academic discipline is a social enterprise made up of individuals, it is not far-fetched to believe that the actions of Diederik Stapel (and others) and the resulting confidence crisis led to a collectively shared ‘psychological trauma’. This trauma has the potential to lead to meaningful changes in how the discipline functions. In the aftermath of the Stapel controversy, various initiatives within social and cognitive psychology developed to manage the consequences of the unethical behaviours of prominent scientists. These initiatives grew out of extreme adversity and public scrutiny and aided in strengthening the internal resolve of these disciplines (although the public image is still tainted; Witkowski, 2015). Given that IOP draws strongly from social psychology, an argument can be made that it is experiencing vicarious or ‘secondary’ post-traumatic growth: the idea that the traumatic experiences of an individual transfer to significant others and that the individual’s post-traumatic growth can similarly benefit those significant others (Arnedo & Casellas-Grau, 2016). Therefore, IOP could learn and develop from the post-traumatic growth experiences of its sister discipline, social psychology. Specifically, it could embrace open science practices and open data initiatives and drive the development of an open science culture within the discipline, as advocated by Efendic and Van Zyl (2019). Becoming an early adopter of these practices, whilst scrutiny of the discipline is still in its infancy, could avert significant professional damage in the future.

Furthermore, Tedeschi, Shakespeare-Finch, Taku and Calhoun (2018) suggested that post-traumatic growth can be facilitated through deep reflection on matters that are particularly sensitive or difficult to process objectively. In the IOP fraternity within South Africa, one of the most sensitive matters is racial diversity and equality. Hernandez-Bark (2019) reported that only one author in the current and previous volumes of SAJIP comes from a previously disadvantaged group, which is also a significant decrease from what was reported by Coetzee and Van Zyl (2014). In a 5-year cycle (from 2014 to the present), it would seem as though SAJIP has done little to advance racial equality amongst its contributing authors. Although there appears to be gender balance, little excuse can be made for racial inequality. When the individuals of a particular group feel marginalised, or feel that opportunities afforded to another group are not afforded to them, it could result in equalising behaviours such as engaging in unethical practices (Cornelius, 2002). Therefore, it is important to understand the causes of this racial inequality. Deep reflection is required in order to understand how this process should be managed by SAJIP and its contributing authors.

Another factor mentioned by the author that needs to be discussed is access to information and the financial resources required to publish within SAJIP. Although open access supports growth and development amongst the public and other scholars through ‘free science’, there are costs associated with managing the editorial process, copy editing, webhosting and the like (Fehder, Murray, & Stern, 2014). In the traditional ‘closed access’ approach, these costs are covered by the publication house, and a fee is charged to readers for access (Elliot, 2005). Here, the burden of costs is transferred from the author to the reader; however, intellectual property is also lost in the process (Biagioli & Galison, 2014). Therefore, the virtuous intent of SAJIP to advance science and the profit-driven business interests of the publication house (AOSIS Pty Ltd) need to be carefully balanced. As organisational psychologists, we understand that there is always a profit motive in even the most honourable of causes, but the financial constraints facing authors should not be barriers to the advancement of science. Therefore, the ideas presented by Hernandez-Bark (2019) to aid authors in managing the financial burden are supported.

Finally, the author shows that internationalisation is lacking within SAJIP. This is a trend that has persisted since the inception of the journal. In a reflection on the origins of contributing authors to SAJIP, Coetzee and Van Zyl (2014) noted that within a 10-year cycle, only 27 of 342 contributing authors resided in countries outside of South Africa. This poses a significant risk to the longevity and impact of the journal. Several recommendations have been made on how this symptom of a larger problem should be addressed. These relate to inviting international scholars onto the editorial board, inviting high-impact scholars to host special issues in the journal, inviting papers, or commissioning research. All of these have been implemented during the past 27 years. Under the editorship of Prof. Gert Roodt, a significant drive to involve international scholars was implemented. Although these initiatives were quite successful in the short term, the rapid gains were not sustainable over time. Therefore, the problem does not pertain to the sustainability of the initiatives, but rather to something deeper. We see three roots of the lack of internationalisation in SAJIP: (1) the nature and scope of the journal, (2) the journal’s title and (3) the fact that the journal is not listed on the Web of Science Index.

Firstly, the journal’s focus and scope reads (SAJIP, 2019):

The SA Journal of Industrial Psychology (SAJIP) provides a forum for cutting-edge, peer reviewed research in all fields related to investigations into the ways in which the individual can balance their daily activities (socially, culturally or linguistically) against the larger context of corporate, organizational and institutional values. (n.p.)

Reflecting upon this generic statement, the journal’s scope does little to differentiate it from any other managerial or business-related journal within the global context. In fact, the scope does not even highlight the psychological nature of the discipline. The SAJIP would need to redefine its scope and focus and identify a proverbial ‘niche’ for itself (Veldsman, 2019). By refining its scope, SAJIP would signal to the international community that it is not just a general management journal that accepts any paper submitted to it, and this would aid in demarcating its territory within the larger discipline.

Secondly, the title of the journal implies that its scope is confined to the South African context. Despite its very broad scope, the journal’s title perceptually limits its impact in the eyes of the international community. Raubenheimer (1994) explained that the first title of the journal in 1974 was Perspectives in Industrial Psychology, which later changed to the Journal of Industrial Psychology (in 1985). When the journal adopted a dual distribution channel (hard copy and online copy) in 2002, the title was again changed, to the South African Journal of Industrial Psychology (SAJIP, 2019). Either of the first two titles would have been a perceptually better choice, as they demarcate clear boundaries around the nature and scope of the journal. Furthermore, removing the country from the title could increase the journal’s recognition as a truly ‘international’ journal and its attraction for international scholars. Given the long history of the current title, the suggestion is not to change the name, but rather to think about a slogan: one that encompasses the niche area to be identified and that advocates the international stature of the journal.

Finally, not being listed in the main Thomson Reuters Web of Science (WOS/ISI) index poses challenges for SAJIP. In the past 7 years, SAJIP has implemented various initiatives to enhance its scientific stature and to facilitate inclusion in the main Thomson Reuters index. Although SAJIP is indexed by all of the major indexers, being excluded from the main WOS/ISI list negatively affects how the journal is perceived by the international community (Catling, Mason, & Upton, 2009). As such, in 2014 SAJIP was submitted to Thomson Reuters for evaluation, for potential inclusion in the main WOS/ISI index. The evaluation process took 3 years and resulted in SAJIP being listed in the WOS/ISI ‘Emerging Sources Citation Index’ (SAJIP, 2019). This index places SAJIP on a ‘waiting list’ for inclusion in the main WOS/ISI list. Although this is a significant achievement, the Emerging Sources Citation Index does not produce an annual (public) impact factor (Somoza-Fernández, Rodríguez-Gairín, & Urbano, 2018), which implies that submissions to SAJIP will not be prioritised by researchers within the international community. Academics within the international context must publish in high-impact journals in order to be tenured or promoted, and submissions to journals like SAJIP are actively discouraged (Berenbaum, 2019). By not being listed in the WOS/ISI index, SAJIP is isolated from the international community. Therefore, active efforts need to be exerted by SAJIP to enhance its international reputation.

Article 2. A reply from a ‘pracademic’: It is not all mischief and there is scope to educate budding authors.

Summary of commentary

In the second paper, Bussin (2019) provides a personal narrative of his journey as a ‘budding academic’. As a pragmatist, Bussin describes the challenges facing young academics in the publication process, which range from editorial politics (e.g. editors requesting that papers from their own journal be cited as a condition of publication), ignorance and a lack of competence amongst researchers, the unclear requirements of the publication process and the selfish and destructive nature of some reviewers, to a lack of support and mentorship. He further argues that the authorship of SAJIP is primarily localised to the South African context, which poses various concerns. All of these factors combined lead young researchers, unknowingly and out of ignorance, to engage in questionable research practices. He advocates for a stronger focus on the competence development of both researchers and reviewers, more transparent editorial processes, and a space within SAJIP where young researchers and practitioners can be mentored in the research process. He further provides structured and practical suggestions to aid in implementing such a process.

Response to commentary
Although there is full agreement with all that Bussin (2019) argued, he touches on two main issues which confront every researcher: (1) their competence in scientific writing, reasoning and, of course, data analysis, and (2) the effect of the destructive nature of some reviewers. Firstly, one of the major factors influencing the extent to which individuals engage in questionable research practices is a lack of competence and adequate training. Within the South African context, psychologists are trained according to the Boulder or ‘science-practitioner’ model (Van Zyl, Deacon, & Rothmann, 2010; Van Zyl, Nel, Stander, & Rothmann, 2016). In theory, psychologists are trained as behavioural scientists who employ scientific methods to predict behaviour and to aid individuals, groups, organisations and communities to develop (Jorgensen-Graupner & Van Zyl, 2019). Science-practitioners are supposedly equipped with the quantitative and qualitative methodological skills, the statistical processing abilities and the practice domain-specific knowledge needed to be effective psychologists (Van Zyl et al., 2016). However, in practice, this is rarely the case. Most psychology programmes in South Africa barely touch on research methods or scientific reasoning at an undergraduate level, and even fewer incorporate functional statistics as a consistent major throughout students’ academic development (Van Der Westhuizen, 2015). At an honours level, students must write a research proposal, engage in a small applied research project or write a systematic literature review as the final assessment outcome for the course.

At master’s level, very little is done to aid students in mastering the skills to process or analyse their own data (both numeric and textual), and scientific reasoning is treated as a tertiary skill (only necessary for the completion of the master’s degree). Furthermore, master’s students in psychology perceive the ‘dissertation’ as nothing more than a barrier withholding them from the internship or Health Professions Council of South Africa (HPCSA) registration (Van Der Westhuizen, 2015). Not only does this affect their attitude towards research, but it also fosters active avoidance of, and a devaluation of, the analytical techniques, tools and processes needed to become competent science-practitioners (Chumwichan & Siriparp, 2016; Köhler & Cortina, 2019). This mindset follows individuals into their chosen careers, where some become practitioners and others stumble into academia (Maree, 2019). Therefore, those who become junior academics often do not have the necessary skills and abilities to effectively run their own research projects and are usually largely dependent on senior staff or promoters to aid in this process (Berenbaum, 2019; Brechelmacher, Park, Ates, & Campbell, 2015). They lack a thorough understanding of scientific reasoning, research design, research methodology, statistical analysis and academic writing (Brechelmacher et al., 2015; Nicholas et al., 2017). From personal experience within the South African academic environment, it is clear that most students and junior academics struggle to write an academic paper, and usually ‘outsource’ the analytics to statisticians or to their supervisors. Although distributing the workload is recommended in principle, in practice outsourcing data analysis can result in a lack of understanding of the outcomes of these analyses. This leads to results being misinterpreted or over-generalised, and means that best practices in both writing and analytics are rarely followed.

Although a dark picture is painted, it is not all thunderstorms and lightning. There has been an increased drive from some academic institutions to upskill the IOP research community within South Africa during the past 5 years. The University of South Africa, for example, hosts various workshops on system psychodynamics and qualitative data analytics. The Optentia Research Programme at the North-West University regularly hosts workshops by international scholars on advanced statistical analyses and research design. Given these types of initiatives, there has been a large increase in the number of more ‘advanced’ studies being submitted to SAJIP each year. Thus, there is hope for further development and advancement, but more systemic initiatives need to be introduced and managed in order to ensure that students and academics are adequately equipped to manage their own projects.

Secondly, Bussin mentions the destructive nature of the review process and of reviewers. Reviewers are, by their very nature, critical, and the process leaves ‘little incentive for altruistic behaviour’ (D’Andrea & O’Dwyer, 2017). It is widely acknowledged that the peer-review system is flawed but, unfortunately, it is the only system we currently have to assess the validity, quality and originality of academic papers (Offutt, 2018; Smith, 2006; Wingfield, 2018). Despite its flaws, when it works, it works well. Excellent reviewers do not just criticise and provide vague and baseless comments; they think along with the author on how to improve the paper (given the limitations of the selected design). Based on such developmental inputs, the final published manuscripts are usually of a much higher standard, which implies that these manuscripts benefitted greatly from the review process. The aim of the review process is not only to evaluate papers and scrutinise them against certain academic merits, but also to help authors grow, learn and develop under the guiding wings of an anonymous mentor. However, the journal also needs to safeguard its contributing authors against the harsh realities which unempathetic reviewers may present. Journal editors need to act as custodians of virtue, filter out destructive criticism and reposition it in a constructive and developmental manner. This will not only empower the author(s) but will also start to create a cycle of positive reciprocity.

Article 3. On the future of SAJIP.

Summary of commentary

In the third paper, Cilliers (2019) provides a personal reflective narrative on his view of SAJIP. In his reflection, he implies that competence development is a key factor in the advancement of science. However, his main message raises a concern with regard to Efendic and Van Zyl’s (2019) paper and, by extension, SAJIP. Similar to other papers in this special section, Cilliers argues that there is an overemphasis on quantitative papers and statistical analyses in SAJIP. He argues that quantitative research might be seen as a prerequisite for being considered ‘scientific’. Within SAJIP, he mentions, there seems to be limited to no scope for basic or theoretical research. He warns Efendic and Van Zyl (2019), as well as SAJIP, to be careful of excluding or devaluing other research designs, as this would detract from the nature of the scientific process. From his perspective, SAJIP should always be unbiased and ensure balance.

Response to commentary
In contrast to what Cilliers (2019) indicated, SAJIP has always adopted an unbiased approach towards the management of manuscripts from different research philosophies and designs. In his farewell editorial, Raubenheimer (1994, p. 1), the founding editor of the now SAJIP, argued that the journal is built on four fundamental principles: ‘(1) be hosted by a university to ensure consistency and congruence, (2) maintain an ethos of action, (3) be non-ideological, and (4) always be independent and neutral’. Despite changes in editors, editorial board members and refinement in the publication guidelines, SAJIP has always embodied its status as a non-ideological, neutral and independent body. To our knowledge, SAJIP has never clearly advocated a preference for a particular research design. In fact, SAJIP actively welcomes qualitative research (Coetzee, 2019). Although we acknowledge the underlying concerns, we would rather argue that an emphasis on quantitative manuscripts may be a function of the times. In their review of two decades of qualitative papers published in the South African Journal of Psychology, the South African Journal of Human Resource Management and SAJIP, O’Neil and Koekemoer (2016) found that out of 1744 published manuscripts, only 242 were qualitative in nature. Similar findings were presented by Coetzee and Van Zyl (2014), who indicated that only 61 out of 342 articles published in SAJIP between 2004 and 2013 employed qualitative designs. From an objective perspective, it is understandable that a believe may exist that SAJIP favours quantitative over qualitative manuscripts; however, Porter (1989) provided context to this phenomenon. He argued that researchers’ preference for quantitative approaches pertains an obsession with ‘counting’. He postulated that researchers naively embodied Drucker’s (1954, reprint in 2012) managerial cliché that if a phenomenon cannot be measured, then it cannot effectively be managed. 
It is this paradigmatic fallacy, fuelled by the time-consuming nature, perceived difficulty, lack of professional training and lack of ‘templates’ or ‘recipes’ for qualitative designs, that results in researchers preferring quantitative over qualitative designs (Teddlie & Tashakkori, 2009).

We argue that the research design should logically follow from the research question. That is, scholars should choose the research design that is best suited to answer their research question. Following on from this, we perceive the research design to be a ‘means to an end’, but not the end in itself (see also Coetzee, 2019). Moreover, this logic implies (1) that researchers should theoretically be competent in every available research design2 to make a sophisticated decision and (2) that they should seek collaborators who are competent in those research designs that are beyond their own expertise.

Finally, although we acknowledge that Efendic and Van Zyl’s (2019) paper placed a large emphasis on quantitative designs, there is also a need to ensure that qualitative studies are transparent and replicable. In an analysis of 52 published qualitative manuscripts within prominent IOP journals, Aguinis and Solarino (2019) found that none were sufficiently transparent to allow for exact, conceptual or empirical replication. None of these papers described the researcher’s position, provided a thorough explanation of the sampling procedures and their nature, clarified the relative importance of the cases or participants, documented the nature of the interaction amongst participants and with the researchers, explained how the saturation point was defined and reached, highlighted unexpected opportunities or challenges, or described how the power (im)balance was managed. Aguinis and Solarino (2019) further showed that data coding procedures were ill-defined at best, that data analysis methods were incomplete or vague, and that none of the authors of these papers disclosed their data. In fact, qualitative research may be in greater need of reform than quantitative studies (Aguinis & Solarino, 2019; Plakoyiannaki, Wei, & Prashantham, 2019).

Article 4. SA Journal of Industrial Psychology: Annual editorial overview 2019.

Summary of commentary

In her final editorial as editor-in-chief of SAJIP, Coetzee (2019) provides insightful comments on the editorial process in Article 4. She confirms and expands upon the ideas mentioned in Efendic and Van Zyl (2019) and offers further interpretations as to what should constitute rigour and relevance within the IOP discipline. She poses five concrete positions followed by a number of recommendations. Firstly, rejections from the journal mostly relate to a lack of methodological rigour, a lack of theoretical contextualisation and misalignment with the scope of the journal. Secondly, statistical methods and analytics should not drive the research process to the extent that the methods become the contribution. Thirdly, researchers need to become competent in the ever-increasing complexities of these methods in order to understand how they could be used. Fourthly, theory generation should be advanced in a rigorous and sophisticated manner, employing the right ‘tools’ to answer the right ‘questions’. Fifthly, similar to Cilliers (2019), she argues that SAJIP should always strike a balance between quantitative, qualitative and mixed-method designs. Finally, she calls for the editorial board to use their discretion with regard to implementing the suggestions of Efendic and Van Zyl (and of the other authors in this section). Fundamentally, she implies that what is best for the nature and advancement of IOP science should always be placed at the forefront of any strategic initiative that SAJIP pursues.


Two themes emanating from Coetzee’s (2019) response resonate with us. The first theme, relating to the appreciation and balance of other methodological designs, has already been discussed in the response to Article 3, Cilliers (2019). The second theme relates to how the statistical or analytical strategies employed should only be regarded as ‘tools’ used to make sense of data; they should be viewed as a means to a proverbial end, and not an end in themselves. This theme complements our discussion of Cilliers (2019) above and extends it to the need for not only authors, but also reviewers and editors, to develop competence in the statistical and analytical methods that they employ.

On the one hand, it seems as though researchers drawing from the quantitative paradigm are ever increasing the complexity of the statistical methods employed in their papers. This rapid adoption of advanced analytics could be seen as an attempt to enhance the perceived sophistication of a given paper or to ensure that the analytical complexity justifies a theoretical contribution (Auyang, 1998; Nickerson, 2018; Pichard, 2015). Either way, it seems as though some may use advanced statistical analytics as a defence against the complexity of human behaviour, or to mystify reviewers or editors (Abramson, 2005; Delgado, Garretson & Delgado, 2019; Friedman & Brown, 2018).

On the other hand, it seems as though the types of research questions being asked are becoming more complex, which requires more sophisticated analytical techniques, as well as competent reviewers or editors to effectively evaluate such papers. Given the rapid increase in the computational power of personal computers, more advanced statistical and qualitative analytical software can be employed to aid researchers in answering more complex research questions (Van de Schoot et al., 2014). For example, Bayesian statistics can be traced back to 1763 (Florens, 2019); however, it could not adequately be employed until recently, as the computational capacity to estimate posterior distributions based on priors was simply not available 10 years ago (Van de Schoot et al., 2014). Similarly, photo-ethnography or visual voice analyses require the use of sophisticated software to code and analyse static images or videos (Keats, 2009), which was not possible until recently. The rise in computational power allows us to investigate the ever-increasing complexity of human phenomena. However, the competence of those who evaluate, review or make decisions on papers also plays a major role in the publishing process (Ardakan, Mirzaie, & Sheikhshoaei, 2011; Bussin, 2019). It is clear that reviewers and/or section editors in SAJIP also might not always have the required competence to understand the latest analytical methods and, therefore, skip over such methods3. This is, however, not a localised matter. In a recent linguistic evaluation of 1716 peer-review reports, Delgado et al. (2019) found that reviewer comments largely related to how a study is theoretically framed, to the formulation of research questions and to methodology. Very few reviewers commented on the actual findings or results of the study.
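To illustrate how accessible such computation has become, consider a deliberately simple, hypothetical sketch of a Bayesian update (the prior, the data and all variable names are our own illustrative assumptions, not drawn from any study discussed here). With a conjugate Beta prior for a success probability, the posterior is available in closed form and can be computed in a few lines of standard scientific Python; more realistic models require the sampling-based estimation that Van de Schoot et al. (2014) describe:

```python
from scipy import stats

# Hypothetical example: estimating a success probability.
# Weakly informative prior belief: Beta(2, 2).
a_prior, b_prior = 2, 2

# Hypothetical observed data: 27 successes in 40 trials.
successes, trials = 27, 40

# Beta-binomial conjugacy: the posterior is again a Beta distribution,
# so no computationally heavy sampling (e.g. MCMC) is needed here.
a_post = a_prior + successes
b_post = b_prior + (trials - successes)
posterior = stats.beta(a_post, b_post)

print(posterior.mean())           # posterior mean of the success probability
print(posterior.interval(0.95))   # 95% credible interval
```

For anything beyond such textbook cases (e.g. posterior distributions of structural model parameters), estimation relies on iterative sampling, which is precisely the step that was computationally out of reach until recently.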

A further explanation pertains to the lack of statistical literacy amongst researchers and reviewers. A recent study revealed that only 14% of graduate students and 30% of professors stated that their level of statistical training was adequate (Loewen et al., 2014). This is largely attributable to ‘statistics anxiety’, which results in researchers avoiding developing their skills in these areas (Counsell, Cribbie, & Harlow, 2016; Van Der Westhuizen, 2015). Sharpe (2013) further argued that several sources of resistance keep researchers, journal editors and reviewers from adopting new statistical innovations:

[R]esistance [sic] are reviewed, specifically a lack of awareness of statistical developments, the failure of journal editors to mandate change, publish or perish pressures, the unavailability of user friendly software, inadequate education in statistics, and psychological factors. (p. 572)

Taken together, researchers, editors and reviewers need to become better acquainted with the latest methodological advancements, or introduce safeguards to buffer the quality of the discipline against the combined impact of poor statistical literacy and the adoption of ever more modern statistical analytical approaches.

The same argument applies to competence in qualitative analytical approaches. Despite the criticisms presented about the poor training of psychologists in quantitative methods and the associated implications, researchers are still better equipped and trained to engage in statistical analyses than in any form of qualitative analytics (Ponterotto, 2010). Given that the field of psychology, as well as the IOP subdiscipline, is dominated by quantitative studies (cf. Coetzee & Van Zyl, 2014; O’Neil & Koekemoer, 2016), various best-practice reporting guidelines and analytical tutorials exist to aid researchers in their analytical strategies. Equivalent resources are far less readily available for qualitative designs (Patton, 1999). Moreover, given the preference for quantitative methods within academia, limited focus is placed on training organisational psychologists in qualitative design and analytical methods (Murtonen, 2015). This largely affects not only their competence in conducting qualitative research but also the quality of potential reviews of manuscripts using these types of designs.

As such, we concur with the sentiment of Coetzee (2019), and the subsequent one from Hoole (2019, to be discussed later), questioning reviewer competence, but would like to advocate (1) that authors employ the most appropriate method for addressing their research questions or testing their hypotheses (Allan & Cribbie, 2013) and (2) that authors, reviewers and editors become aware of their own limitations and actively engage in means to address such issues.

Article 5. Indeterminateness in industrial and organisation psychological research: A root metaphor analysis.

Summary of commentary

Although Crous (2019) fully supports and values the recommendations of Efendic and Van Zyl (2019), he argues that the issues underlying the confidence crisis pertain less to the systemic issues mentioned in that article and rather arise from the world view from which researchers and practitioners operate. From this perspective, he argues that the IOP discipline is falsely positioned or believed to be mechanistic or precise in nature (i.e. a strong focus on precision and prediction). He argues that, contrary to what IOP scientists believe, the articles that find their way into SAJIP arise from formism rather than mechanism. Formism holds that the world is an inter-related system that contains various forms, essences and natures. Psychological investigations from this perspective involve endeavours aimed at identifying the fixed nature, essence and forms of psychological processes. The main strength of formism is that it is flexible and provides enough scope to understand complex behaviour; however, it lacks precision. He concludes with a recommendation as to how to formalise formism through the adoption of Efendic and Van Zyl’s (2019) guidelines.


Crous (2019) provides a philosophical extension to the current discussion and, in doing so, offers an interesting new angle on the importance of Efendic and Van Zyl’s (2019) recommendations. He highlights, like Maree (2019) below, that a significant shift needs to take place in the world view of psychological practitioners. His argument has merit: IOP is not a science built on precision and accuracy, although thorough attempts have been made to position it as such throughout the 20th century (Grand et al., 2018). A mere look at two decades of SAJIP volumes confirms that studies testing similar models, in different contexts, produce different results. Although various explanations are usually provided within each manuscript to justify the differences, fundamentally they reduce to the complexity of human nature and the impact of context or environment (Gelfand, Aycan, Erez, & Leung, 2017). Although these factors cannot always be controlled for (Grand et al., 2018), it is imperative that processes and procedures be put in place to empower IOP researchers to make ‘good enough’ assessments, given the contextual constraints in which they function. This implies that clearer guidelines should be introduced for the methodologies, analytical tools and techniques researchers employ, in order to assist them with making better interpretations and drawing more realistic conclusions from their studies. Most importantly, because one type of behaviour is seldom the sole cause of another (as a mechanistic world view would assume), more transparency is needed with regard to those behaviours and aspects that are unrelated to an investigated outcome, as well as those that were not originally hypothesised but emerged as associated nonetheless.

Article 6. Avoiding the elephant in the room: The real reasons behind our research crisis.

Summary of commentary

In the sixth paper, Hoole (2019) argues for a higher-level view to understand the primary causes of the crisis in confidence. She argues that contributing factors could be categorised as emanating from either (1) systemic or institutional problems or (2) research methods, policies and ethics. With regard to systemic or institutional problems, she argues that tertiary educational institutions within South Africa are placed under ever-increasing pressure by the National Development Plan (NDP, 2013) to increase the number of doctoral graduates across the country to 5000 per annum by 2030. Although this is an admirable goal, she argues that meeting the demands of the NDP (2013) is rather idealistic and nearly impossible to achieve. Furthermore, universities have started to place a strong emphasis on postgraduate education, which is usually more labour intensive and time consuming. This emphasis further increases the work pressure on academics and detracts from research time. She argues that because of this pressure, various systemic issues have started to manifest. Firstly, given that both the primary and secondary educational environments are riddled with challenges, poorer quality students enter the higher education environment, which, in turn, requires more effort from universities to upskill and educate these individuals. Secondly, given the change in governmental policy regarding access to higher education, a large number of students enter universities. This leads to higher job demands for academics – but, at the same time, governmental resources are ever decreasing. Furthermore, governmental subsidies and funding within higher education have been drastically curbed. Universities therefore need to look for alternative income streams in order to ensure fiscal sustainability.

One such way is to place more emphasis on academics publishing in order to capitalise on the government’s incentive scheme for research outputs. Universities set high annual research quotas for academics in order to ensure their financial sustainability over time. The government’s incentive scheme, coupled with the high job demands of academics and the pressure from universities to publish, increases the opportunities for academic fraud and questionable research practices. She further argues that the annual publication quotas set for academics by universities aggravate this problem, as these quotas affect not only the quality of the work but also its potential impact. This also leads to undesirable research practices, such as publishing in predatory journals.

She further highlights another concern: ageing academics. Most competent and highly skilled researchers within South Africa are nearing retirement, and the younger generation is not adequately skilled to fill this proverbial gap. The talent pool is shrinking at an alarming rate, as academia cannot attract high-quality academics but ‘loses’ such talent to industry because of low academic salaries and the fierce competition for talent within the market.

Secondly, she reiterates Efendic and Van Zyl’s (2019) call that issues also arise from research methods, ethics and publication policies. Similar to Bussin (2019), Cilliers (2019) and Coetzee (2019), she argues that the crisis in confidence is one of the consequences of incompetence and poor training in research methods and in qualitative or statistical analyses. Given that most academics are neither adequately skilled nor structurally empowered to increase their competence, the ideals of the NDP (2013) are idealistic and unachievable unless drastic measures are introduced to address such issues. Her argument pertains not only to academics as authors, but also to academics as reviewers. Related to this point, she shows that reviewers are not necessarily competent in the latest designs and analytics, which results in papers not being critically evaluated. Furthermore, she argues that SAJIP does not have rigorous selection criteria for reviewers and that those who are in the system are not adequately recognised or rewarded. This leads to poorer quality reviews. As such, quality control in the review process remains an issue that needs to be addressed. She further makes good suggestions as to what academic institutions can do to reduce the pressure to publish; however, this is not something that SAJIP can directly address4.

Finally, although she supports Efendic and Van Zyl’s (2019) suggestions early on in the paper, she argues that implementing such:

[M]ay be a bit ambitious, in the sense that expecting journal editors and reviewers to scrutinize every manuscript’s metadata and syntaxes may not be feasible, especially for journals that receive a high number of submissions. It is good practice to have this information available in the case where the manuscript is not clear and does not adhere to the normal requirements, but it should not be up to the journal’s review team alone to try and solve the crisis. (p. 1)


In the preceding paragraph, Hoole (2019) cautions against the implementation of open science practices within SAJIP, as these may place an additional hindering demand on already overworked and undervalued reviewers. Here, she incorrectly interprets the nature and intent of the open science framework. Efendic and Van Zyl (2019) argued that more transparent research practices will naturally lead to researchers becoming more open and honest about the approaches that they employ and to justifying the decisions that they have made. It also means that reviewers – and researchers who are interested in replicating a study’s findings – have access to information not otherwise readily available and can thus make more informed decisions. The fact that the metadata, the pre-registration protocols, the analysis syntaxes and the like are available does not mean that reviewers are required to review such material – although we recommend that they do. Open science initiatives are not actively monitored by anyone; they are not an instrument to act as a watchdog over researchers, but rather a means to facilitate learning (even years after a manuscript was published) and to enhance transparency (e.g. which hypotheses were formulated a priori? Which other variables were measured? Was the analysis strategy changed in the publication process and, if so, why?). Fundamentally, it is driven by an intrinsically motivated process in which the desire to advance science is valued. Similarly, affording readers access to supplementary material (such as the Mplus syntaxes) will aid them in learning new skills and techniques without having to spend hundreds of euros (and hours) on specialised training. Indeed, most of what we have learnt about Mplus or R, for example, came from publications where the syntaxes and data were available. In this way the author(s) advance science, and readers are also able to upskill themselves in the process.
Efendic and Van Zyl (2019) argued that Level 1 of the TOP guidelines should be implemented, which only implies that authors need to disclose whether or not materials are available. In effect, it is just about disclosure. Implementing these guidelines (some of which have already been implemented) encourages the professional development of academics and students, enhances transparency and increases the academic standing of the journal within the international community.

Furthermore, although Hoole (2019) clearly articulated a number of challenges that may contribute to the crisis in confidence, most of these (such as the NDP, university policies on research quotas, the retirement age of academics, the extreme influx of students, the high job demands of academics and the like) are not easily addressed. It is also beyond the scope of SAJIP and most of its stakeholders to address such issues. The issues pertaining to research methods – and the competence or recognition of reviewers – can and must be managed by SAJIP. Hoole (2019) argues for setting formal selection and minimum standards for reviewers. This idea is easy to implement, although difficult to manage, given that SAJIP’s reviewer base is already relatively small. We would advise that minimum criteria be set in terms of academic qualification (a PhD or equivalent, or demonstrable progress towards completing a PhD degree) and at least one prior academic publication within SAJIP. For practice-orientated papers, it is suggested that reviewers have at least a master’s degree, with ‘adequate’ experience in the related domain. It is, however, unclear how this would affect the already shrinking reviewer base. It is important to note that, according to SAJIP (2019), several initiatives have already been implemented, ranging from upskilling junior reviewers and employing statistical consulting editors through to incentivising psychologists with continuous professional development points. These initiatives have not yielded high returns for the journal. Other issues relating to the review process, the competence of reviewers and methodological training have already been discussed in response to Bussin (2019) and Coetzee (2019).
Another possible alternative for managing this process, mentioned by Efendic and Van Zyl (2019), is to experiment with an open collaborative review process (similar to that of Frontiers) or to make the peer-review reports public after publication (Marshall, Shekelle, Leatherman, & Brook, 2000).

Article 7. Burning the straw man: What exactly is psychological science?

Summary of commentary

In the seventh paper, Maree (2019), like Crous (2019), argues that the problem in psychological science lies in stakeholders’ understanding of what psychological ‘science’ constitutes. He argues that the replication crisis is a result of a particular view of what psychologists believe science is and should be. Although he supports the suggestions made by Efendic and Van Zyl (2019), he urges caution, as replication alone is a poor criterion for scientific character. He argues that psychologists need to redefine (in their own minds) what psychological science constitutes and highlights a number of issues that affect how science is viewed and perceived. Primarily, Maree (2019) mentions that there is a shared, collective belief amongst psychologists, the public and a large proportion of academics within South Africa that only ‘quantitative research’ equates to ‘science’. The value of the qualitative research paradigm is therefore largely undermined within the collective consciousness of the psychological fraternity. Given that psychologists within South Africa are trained within the scientist-practitioner (SP) framework, coupled with this unrealistic view of what constitutes ‘science’, after graduation psychologists see themselves either as scientists or as practitioners. This SP divide is ever increasing. He argues that a thorough integration between the ‘psychologist as scientist’ and the ‘psychologist as practitioner’ will ensure the quality and sustainability of psychology as a science. However, he mentions that psychologists within South Africa view the integration of science into practice (or practice into science) as nothing more than an ‘ideal’ that is not practically possible in real-world scenarios. Practitioners are trained to believe that measurement and empirical justification are the only criteria used to constitute ‘science’.
Psychologists have a pathological obsession with measurement, but measurement does not equate to something being a ‘science’. He further states that ‘certain things are measurable whilst others are not, depending on the level of analysis’. Given that psychologists are also not adequately trained in these matters, it causes an even further rift between science and practice. Similarly, he argued that a major difference exists between views of quantitative and qualitative research realities, where proponents of the one ‘lambast’ supporters of the other. This is yet another divide that further splits scientists and practitioners.

He concludes his commentary with the following (Maree, 2019):

If it can and should be measured, then by all means, but if it should be talked to and talked about in a process of claim-counter-claim then even dialogical, interpretative or reconstructive processes can be utilised to describe and explain realities. Whenever the scientist claims something about reality, natural, psychosocial or otherwise, the ensuing debate between people and reality, and people and people constitutes science. (p. 2)


Maree (2019) highlights a pertinent concern which, in principle, was echoed by Cilliers (2019), Coetzee (2019), Crous (2019) and Veldsman (2019). Within the South African context, psychologists equate the concept of ‘empirical research’ with ‘quantitative research designs’. As a fundamental part of the scientific method (and therefore ‘science’), empirical research refers to a means of gaining knowledge, clarifying understanding or explaining a phenomenon through (in)direct observations of experiences (Florens, 2019). This implies that knowledge can be obtained through various forms of both quantitative and qualitative means. However, given the emphasis on quantitative research training within the formal education of psychologists, it is not surprising that students believe that ‘if it can’t be measured then it is not science’ (Murtonen, 2015). This belief continues into the working world and even into academia. From O’Neil and Koekemoer’s (2016) study, it is clear that this ‘tradition’ has been apparent for more than two decades, with only 13.87% of publications in prominent South African journals employing qualitative designs.

Although this belief will be difficult to change in the short term, both SAJIP and its general psychology counterpart, the South African Journal of Psychology, could play a major role in redefining this mindset. Academic journals play an important role in charting the future of a given profession by articulating practice or research domains and by challenging convention (Coetzee & Van Zyl, 2014). Therefore, SAJIP could heed the call of these authors and implement a process whereby qualitative research and mixed-method designs are showcased and/or valued. Furthermore, publishing best-practice guidelines for qualitative analytical techniques (such as thematic content analysis, interpretative phenomenological analysis, photo-voice, etc.), hosting free webinars on design, as well as providing clearer publication guidelines could also make a significant difference.

Article 8. Reducing our dependence on null hypothesis testing: A key to enhance the reproducibility and credibility of our science.

Summary of commentary

In the penultimate commentary, Murphy (2019) argues that the over-dependence on null hypothesis testing (NHT) is a major cause for concern within IOP science. He argues that the over-reliance on NHT results in two main issues: (1) the inadequate power of studies and (2) the strong temptation to engage in questionable research practices ‘in search of the significant p-value’. The NHT paradigm dominates psychological research, with more than 95% of papers in psychology employing the null hypothesis as a criterion for evaluating results (Bakker, Van Dijk, & Wicherts, 2012). As a result, authors, amongst others, abandon studies with non-significant results. Reviewers and editors reject papers with non-significant findings, instruct authors to increase sample sizes or to test other hypotheses, etc. He postulates that many of the suggestions of Efendic and Van Zyl (2019) may assist in managing the over-reliance on NHT. However, he also notes that their suggestions may not suffice if the NHT obsession persists.

He further highlights two issues with NHT. Firstly, it does not test null associations that individuals actually believe to be credible or real (e.g. that interventions have NO effect or that concepts are completely uncorrelated). He argues that treatments may in fact have an effect, but one so small that it is not measurable. Secondly, the outcomes of NHT are mostly misinterpreted by researchers. If one fails to reject the null hypothesis, researchers conclude that, for example, the intervention did not work. This implies that researchers believe the null hypothesis physically explains a manifested phenomenon relating to their results, which is incorrect. He argues that failing to reject the null hypothesis may only show that the research design was not sufficiently powered.
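Murphy’s point about power can be made concrete with a standard a priori power calculation. The sketch below is our own illustration, not taken from his commentary; it uses the common normal approximation for a two-sided, two-sample comparison of means, with Cohen’s conventional benchmark effect sizes as assumed inputs:

```python
import math
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)  # critical value for alpha
    z_beta = stats.norm.ppf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A 'small' effect (d = 0.2) requires roughly 400 participants per group,
# whereas a 'large' effect (d = 0.8) requires only about 25.
print(n_per_group(0.2))
print(n_per_group(0.8))
```

A non-significant result from a study with, say, 50 participants per group therefore says very little about whether a small effect exists, which is exactly the misinterpretation Murphy (2019) warns against.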

Finally, Murphy (2019) supports the implementation of Efendic and Van Zyl’s (2019) suggestions but notes that doing so will be difficult to manage as it requires a fundamental shift in the way authors, reviewers and editors view science and conduct their research work.


Murphy (2019) provides an interesting insight into how the over-reliance on NHT fuels the crisis in confidence within psychology. His primary arguments are strongly aligned with suggestions made in the literature (Hales, 2016; Konijn et al., 2015; Meyer, Van Witteloostuijn, & Beugelsdijk, 2017). Although weaning the discipline off NHT may not be an easy task, empowering researchers to enhance the rigour of their empirical designs may be a first step in the right direction (Meyer et al., 2017). Researchers could start employing multi-study designs or experimental approaches as part of their research process, and SAJIP could introduce reporting policies that encourage exact reporting and welcome papers with null results (Hales, 2016; Meyer et al., 2017). Furthermore, more robustness checks could be reported (e.g. alternative functional regression models, introducing control variables), and authors could be encouraged to support and provide interpretations based on all available information. Finally, SAJIP could encourage the use of more modern techniques, such as Bayesian analyses, as a complete alternative to traditional NHT approaches.

Article 9. Examining the strings on out [sic our] violins whilst Rome is burning: A rebuttal.

In the final commentary, Veldsman (2019) presents a counter-argument to the matters raised in Efendic and Van Zyl (2019). He proposes that the crisis in confidence is merely a symptom of a deeper dysfunctionality within scientific practice. He argues that this symptom is the result of a dynamic interaction between three meta-crises: (1) the growing irrelevance of the discipline, (2) an obsolete, constricting research paradigm and (3) toxic dynamics within the academic system.

From his perspective, IOP is growing increasingly irrelevant, both as a discipline and as a science. He highlights that research within this domain fails to adapt to, or to address, contemporary changes in the world of work. Academic investigations into phenomena are usually ex post facto or driven by a continued drive for the scientific refinement of theory. Industrial and organisational psychology research is therefore not readily able to produce answers to the complex problems facing the world, and the relevance of the ‘evidence-based practices’ it generates is under threat. He sees the primary reason for this threat in the failure to develop these practices in a timely manner. Here, he argues that practice seems to be driving innovation in the discipline, with science in an ever-increasing race to ‘catch up’.

Secondly, he argues that IOP draws from an outdated research paradigm that is obsessed with the testing and verification of claims. He presents three arguments in support of his claim. Veldsman (2019) argues that IOPs have an obsolete world view framing their thinking, which fundamentally affects the way they approach research. The discipline views the world in a linear, mechanistic manner focused on understanding cause–effect relationships. Like Crous (2019), he argues that a more realistic view of the world is required. He suggests that IOP researchers view the world as a complex, chaotic and inter-related system, where the researcher’s role is to understand, explain and predict to make sense of the chaos. He also mentions that IOPs employ an inadequate research design paradigm given the pace of change within the world of work. The obsession with testing, verifying and retesting claims distracts from core issues in practice and affects the time it takes to present viable, practical evidence-based solutions. Veldsman (2019) argues in favour of adopting a ‘falsification research paradigm’, where theory is accepted as being true and implemented in practice until evidence to the contrary is found during application. Doing so would lead to refinement and adaptation. Lastly, similar to Crous (2019) and Maree (2019), Veldsman (2019) argues that IOP’s drive to quantify everything is the main reason for its downfall. He mentions that ‘not everything that matters can be measured’.

Thirdly, Veldsman (2019) argues that the academic system and the research community are toxic. Like most of the authors in this special section, he mentions that academics are pressured to publish and to become specialists in topics that are, or have become, irrelevant to mainstream practice. Researchers pursue safe, low-risk topics in order to enhance the probability of getting published. They manipulate results, drop hypotheses and transform data in order to ensure that significant results are achieved, which also gives rise to ever-increasing unethical publication practices. Finally, given all this, the academic system undermines the well-being of both academics and students.

He further argues that the ‘recommendations may even worsen the mounting pressures on academics by imposing even more, burdensome, self-serving, research processes and standards, having being already captured by trivial statistical minutiae’. For him, it is the root cause of these matters that needs to be addressed.


Veldsman (2019) argued that the crisis in confidence is merely a symptom of a larger problem within the proverbial DNA of the IOP discipline. Although we agree to some extent, various arguments presented by the author are ungrounded and hastily suggested, with limited consideration of their consequences. Firstly, in his final argument he contends that Efendic and Van Zyl’s (2019) suggestions should be negated because they would inherently increase the job-related demands placed on authors5. However, taking a strategic view of the discipline and of SAJIP, attempts must be made to enhance the quality of manuscripts and the publication process in order to strengthen the credibility of the discipline and the quality of the journal. In their article, Efendic and Van Zyl (2019) highlighted several serious issues present within SAJIP, ranging from low-powered studies through to researchers actively engaging in questionable research practices (such as manipulating data). In order to ensure that the journal maintains its status as a premier psychological journal in Africa and to enhance its international reputation, structured initiatives aimed at managing these purportedly ‘lesser’ issues (Veldsman, 2019, p. 1) need to be introduced. South African Journal of Industrial Psychology and its authorship have direct control over these matters, whereas they have limited influence over, for example, the toxic academic system. As argued by Veldsman (2019, p. 5), SAJIP should find and implement the ‘20% research requirements that make 80% of the difference in research quality’; we believe that incorporating the TOP guidelines of Nosek et al. (2015) would do just that.

A further point of contention in relation to Veldsman’s (2019) rebuttal relates to the argument that IOP is growing in its irrelevance. He posits that IOP is a dying discipline that is losing momentum because of its inability to respond to timely issues within the current world of work. Although we agree that IOP would benefit from greater speed in presenting evidence-based best practices for current challenges, we believe that his formative argument is fundamentally skewed by the perception that the role and function of an IOP is contextually bound to addressing problems occurring only in the present. This may be the case for some practitioners and scholars within the South African context; however, if one adopts a global perspective, it is clear that the IOP discipline is future orientated, and not just an applied discipline aimed at mere problem-solving (Aguinis, Ramani, & Villamor, 2019; Bal et al., 2019). The value of the discipline is clearly highlighted if one looks at current research trends within the international context.

From an international perspective, research in both applied and fundamental IOP domains is making several significant advancements in science and practice. These advancements include human–robot interaction (Turja & Oksanen, 2019), the development and evaluation of artificial intelligence-based recruitment, selection and wellness systems (Abubakar, Behravesh, Rezapouraghdam, & Yildiz, 2019; Brougham & Haar, 2018; Upadhya & Khandelwal, 2018), and the development of mechanisms to battle cybercrime (Power, 2018). Furthermore, the importance of IOP was acknowledged in 2015 by then President Barack Obama, who argued that it is one of the most important disciplines in the USA (Brown, 2015), and labour statistics show that it is the fastest-growing occupation in the USA (United States Bureau of Labor Statistics, 2017). Therefore, IOP is not becoming irrelevant; rather, the demand for its services and insights stretches far beyond its traditional practice domains. However, what we take from Veldsman’s (2019) argument is that these current topics are not yet adequately represented in SAJIP and that the journal should invite contributions from authors working in such modern domains. Through the publication of such topics, local scholars and students would gain exposure to new developments in the field.

Furthermore, the reasoning behind Veldsman’s (2019) position on the outdated research paradigm from which IOPs draw, and on why a falsification paradigm would be more useful, is unclear. Although the sentiment of responding to issues in a timely manner is understandable from a pragmatic perspective, the adoption of a ‘falsification paradigm’ is not, from our point of view. The falsifiability paradigm (developed by Popper, 1983) assumes the cliché that the exception redefines the norm: any and all theories are accepted as true unless disproven by observation (Laudan, 1983). This paradigm has been highly criticised for many reasons. For example, Popper (1983) excluded legitimate science (such as mathematics) from his philosophical paradigm and willingly granted scientific merit and credibility to various forms of pseudoscience, such as astrology, parapsychology and demonology (Mahner, 2007). Laudan (1983, p. 121) stated that falsifiability ‘has the untoward consequence of countenancing as “scientific” every crank claim which makes ascertainably false assertions’. Given the current criticisms framing psychology as a pseudoscience (Hecht et al., 2018; Tavris, 2014), as well as the ensuing arguments regarding its legitimacy as a science (Nosek et al., 2015), adopting such a paradigm would have severe negative consequences for the discipline, the profession and the public; it would push the discipline into a ‘science’ of absolute irrelevance and obscurity. We therefore strongly discourage the adoption of the falsification paradigm within the IOP discipline.


The original article by Efendic and Van Zyl (2019) highlighted the need to enhance the credibility and transparency of research published in SAJIP. The nine commentaries submitted in response extend the debate as to the origin, nature and implications of the credibility crisis within the South African IOP discipline. These papers highlight that IOP within the South African context is in dire need of reform and that a multi-pronged approach is required to address such issues effectively. Fundamentally, changes need to take place not only in how science is conducted, but also in how it is constructed in the minds of scientists. Although SAJIP cannot respond to each of the challenges mentioned by the authors, it is suggested that the journal adopt a developmental stance as a custodian of the discipline within South Africa. South African Journal of Industrial Psychology needs to develop a clear strategy not only to enhance the credibility of the IOP discipline, but also to create a culture that celebrates openness, transparency and open science.


The authors would like to extend their appreciation to Professors Mark Bussin, Frans Cilliers, Melinde Coetzee, Freddie Crous, Crystal Hoole, David Maree, Kevin Murphy, Theo Veldsman and Doctor Alina Hernandez-Bark for their insightful comments and reflections on the future of the discipline. Furthermore, they would like to thank Doctor Emir Efendic for his insightful comments on the initial draft of this article.

Competing interests

The authors declare that they have no financial or personal relationships which may have inappropriately influenced them in writing this article.

Authors’ contributions

Both authors contributed equally to this work.

Ethical considerations

This article followed all ethical standards for carrying out research without direct contact with human or animal subjects.

Funding information

The page fee for this article, and all those in this special issue, was graciously sponsored by the Department of Industrial Psychology and People Management at the University of Johannesburg. The authors extend their heartfelt appreciation to both the institution and Prof. F. Crous for their investment in this initiative.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.


The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.


Abma, R. (2013). De publicatiefabriek. Over de betekenis van de affaire-Stapel. Tijdschrift voor Psychiatrie, 55(12), 1019–1020.

Abramson, J.L. (2005). Learning from Lying: Paradoxes of the Literary Mystification. University of Delaware Press.

Abubakar, A.M., Behravesh, E., Rezapouraghdam, H., & Yildiz, S.B. (2019). Applying artificial intelligence technique to predict knowledge hiding behavior. International Journal of Information Management, 49, 45–57. https://doi.org/10.1016/j.ijinfomgt.2019.02.006

Aguinis, H., & Solarino, A.M. (2019). Transparency and replicability in qualitative research: The case of interviews with elite informants. Strategic Management Journal, 40(1), 1291–1315. https://doi.org/10.1002/smj.3015

Aguinis, H., Ramani, R.S., & Villamor, I. (2019). The first 20 years of Organizational Research Methods: Trajectory, impact, and predictions for the future. Organizational Research Methods, 22(2), 463–489.

Allan, T.A., & Cribbie, R.A. (2013). Evaluating the equivalence of, or difference between, psychological treatments: An exploration of recent intervention studies. Canadian Journal of Behavioural Science, 45(4), 320. https://doi.org/10.1037/a0033357

Ardakan, M.A., Mirzaie, S.A., & Sheikhshoaei, F. (2011). The peer-review process for articles in Iran’s scientific journals. Journal of Scholarly Publishing, 42(2), 243–261. https://doi.org/10.3138/jsp.42.2.243

Arnedo, C.O., & Casellas-Grau, A. (2016). Vicarious or secondary post-traumatic growth: How are positive changes transmitted to significant others after experiencing a traumatic event? Conceptual discussion and clarification. Predictors of posttraumatic growth in significant others. Relational posttraumatic growth: Therapeutic value? In C.R. Martin, V.R. Preesy, & B. Vinood (Eds.). Comprehensive guide to post-traumatic stress disorders (pp. 1767–1782). Cham: Springer.

Auyang, S.Y. (1998). Foundations of complex-system theories: In economics, evolutionary biology, and statistical physics. London: Cambridge University Press.

Bakker, M., Van Dijk, A., & Wicherts, J.M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 534–554. https://doi.org/10.1177/1745691612459060

Bal, P.M., Doci, E., Lub, X., Van Rossenberg, Y.G.T., Nijs, S., Achnak, S., … van Zelst, M. (2019). Manifesto for the future of work and organizational psychology. European Journal of Work and Organizational Psychology, 28(3), 289–299. https://doi.org/10.1080/1359432X.2019.1602041

Berenbaum, M.R. (2019). Impact factor impacts on early-career scientist careers. National Academy of Sciences of the United States of America, 34(1), 16659–16662. https://doi.org/10.1073/pnas.1911911116

Biagioli, M., & Galison, P. (2014). Scientific authorship: Credit and intellectual property in science. Abingdon: Routledge.

Brechelmacher, A., Park, E., Ates, G., & Campbell, D.F. (2015). The rocky road to tenure–Career paths in academia. In T. Fumasoli, G. Goastellec, & B.M. Kehm (Eds.). Academic work and careers in Europe: Trends, challenges, perspectives (pp. 13–40). Cham: Springer.

Brougham, D., & Haar, J. (2018). Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of our future workplace. Journal of Management & Organization, 24(2), 239–257. https://doi.org/10.1017/jmo.2016.55

Brown, M.S. (2015, September 25). Obama uses behavioral science. Should you? Forbes. Retrieved from www.forbes.com/sites/metabrown/2015/09/25/obama-uses-behavioral-science-should-you/

Bussin, M. (2019). A reply from a ‘pracademic’: It is not all mischief and there is scope to educate budding authors. SA Journal of Industrial Psychology, 45(0), a1726. https://doi.org/10.4102/sajip.v45i0.1726

Catling, J., Mason, V., & Upton, D. (2009). Quality is in the eye of the beholder? An evaluation of impact factors and perception of journal prestige in the UK. Scientometrics, 81(2), 333–345. https://doi.org/10.1007/s11192-009-2124-1

Chumwichan, S., & Siriparp, T. (2016). Influence of research training environment on research interest in graduate students. Procedia-Social and Behavioral Sciences, 217, 950–957. https://doi.org/10.1016/j.sbspro.2016.02.065

Cilliers, F. (2019). On the future of SAJIP. SA Journal of Industrial Psychology, 45(0), a1732.

Cilliers, F., & May, M. (2010). The popularisation of positive psychology as a defence against behavioural complexity in research and organisations. SA Journal of Industrial Psychology, 36(2), 1–10. https://doi.org/10.4102/sajip.v36i2.917

Coetzee, M. (2019). SA Journal of Industrial Psychology: Annual editorial overview 2019. SA Journal of Industrial Psychology, 45(1), a1741.

Coetzee, M., & Van Zyl, L.E. (2013). Advancing research in industrial and organisational psychology: A brief overview of 2013. SA Journal of Industrial Psychology, 39(1), 1–4. https://doi.org/10.4102/sajip.v39i1.1174

Coetzee, M., & Van Zyl, L.E. (2014). A review of a decade’s scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA Journal of Industrial Psychology, 40(1), 1–16. https://doi.org/10.4102/sajip.v40i1.1227

Cornelius, N. (2002). Building workplace equality: Ethics, diversity and inclusion. London: Cengage Learning EMEA.

Counsell, A., Cribbie, R.A., & Harlow, L. (2016). Increasing literacy in quantitative methods: The key to the future of Canadian psychology. Canadian Psychology/Psychologie Canadienne, 57(3), 193. https://doi.org/10.1037/cap0000056

Crous, F. (2019). Indeterminateness in industrial and organisation psychological research: A root metaphor analysis. SA Journal of Industrial Psychology, 45(0), a1756.

D’Andrea, R, & O’Dwyer, J.P. (2017). Can editors save peer review from peer reviewers? PLoS One, 12(10), e0186111. https://doi.org/10.1371/journal.pone.0186111

Delgado, A.F., Garretson, G., & Delgado, A.F. (2019). The language of peer review reports on articles published in the BMJ, 2014–2017: An observational study. Scientometrics, 120(3), 1225–1235.

Drucker, P. (1954). The practice of management. Abingdon: Routledge.

Efendic, E., & Van Zyl, L.E. (2019). On reproducibility and replicability: Arguing for open science practices and methodological improvements at the South African Journal of Industrial Psychology. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 45(0), a1607.

Elliot, R. (2005). Who owns scientific data? The impact of intellectual property rights on the scientific publication chain. Learned Publishing, 18(2), 91–94. https://doi.org/10.1087/0953151053584984

Fehder, D.C., Murray, F., & Stern, S. (2014). Intellectual property rights and the evolution of scientific journals as knowledge platforms. International Journal of Industrial Organization, 36, 83–94. https://doi.org/10.1016/j.ijindorg.2014.08.002

Florens, J.P. (2019). Elements of Bayesian statistics. Abingdon: Routledge.

Friedman, H.L., & Brown, N.J. (2018). Implications of debunking the ‘Critical Positivity Ratio’ for humanistic psychology: Introduction to special issue. Journal of Humanistic Psychology, 58(3), 239–261. https://doi.org/10.1177/0022167818762227

Gelfand, M.J., Aycan, Z., Erez, M., & Leung, K. (2017). Cross-cultural industrial organizational psychology and organizational behavior: A hundred-year journey. Journal of Applied Psychology, 102(3), 514. https://doi.org/10.1037/apl0000186

Grand, J.A., Rogelberg, S.G., Allen, T.D., Landis, R.S., Reynolds, D.H., Scott, J.C., … Truxillo, D.M. (2018). A systems-based approach to fostering robust science in industrial-organizational psychology. Industrial and Organizational Psychology, 11(1), 4–42. https://doi.org/10.1017/iop.2017.55

Hales, A.H. (2016). Does the conclusion follow from the evidence? Recommendations for improving research. Journal of Experimental Social Psychology, 66, 39–46. https://doi.org/10.1016/j.jesp.2015.09.011

Hecht, D.K., Lobato, E., Zimmerman, C., Blanco, F., Matute, H., Simonton, D.K., … Ball, D. (2018). Pseudoscience: The conspiracy against science. Boston, MA: MIT Press.

Hernandez-Bark, A.S. (2019). The replicability crisis as chance for psychological research and SAJIP. SA Journal of Industrial Psychology, 45(0), a1724.

Hoole, C. (2019). Avoiding the elephant in the room: The real reasons behind our research crisis. SA Journal of Industrial Psychology, 45(0), a1723.

Jorgensen-Graupner, L.I., & Van Zyl, L.E. (2019). Inspiring growth: A counselling framework for industrial psychology practitioners. In Positive psychological intervention design and protocols for multi-cultural contexts (pp. 381–404). Cham: Springer.

Keats, P.A. (2009). Multiple text analysis in narrative research: Visual, written, and spoken stories of experience. Qualitative Research, 9(2), 181–195. https://doi.org/10.1177/1468794108099320

Köhler, T., & Cortina, J.M. (2019). Play it again, Sam! An analysis of constructive replication in the organizational sciences. Journal of Management. https://doi.org/10.1177/0149206319843985

Konijn, E.A., van de Schoot, R., Winter, S.D., & Ferguson, C.J. (2015). Possible solution to publication bias through Bayesian statistics, including proper null hypothesis testing. Communication Methods and Measures, 9(4), 280–302. https://doi.org/10.1080/19312458.2015.1096332

Laudan, L. (1983). The demise of the demarcation problem. In R.S. Cohen & L. Laudan (Eds.), Physics, philosophy and psychoanalysis (pp. 111–127). Dordrecht: Springer.

Loewen, S., Lavolette, E., Spino, L.A., Papi, M., Schmidtke, J., Sterling, S., & Wolff, D. (2014). Statistical literacy among applied linguists and second language acquisition researchers. TESOL Quarterly, 48(2), 360–388. https://doi.org/10.1002/tesq.128

Mahner, M. (2007). Demarcating Science from Non-Science. In T. Kuipers (Ed.), Handbook of the Philosophy of Science: General Philosophy of Science – Focal Issues (pp. 515–575). Amsterdam: Elsevier.

Manning, S.F., De Terte, I., & Stephens, C. (2015). Vicarious posttraumatic growth: A systematic literature review. International Journal of Wellbeing, 5(2), 125–139. https://doi.org/10.5502/ijw.v5i2.8

Maree, D.J.F. (2019). Burning the straw man: What exactly is psychological science? SA Journal of Industrial Psychology, 45(0), a1731.

Marshall, M.N., Shekelle, P.G., Leatherman, S., & Brook, R.H. (2000). The public release of performance data: What do we expect to gain? A review of the evidence. JAMA, 283(14), 1866–1874. https://doi.org/10.1001/jama.283.14.1866

Meyer, K.E., Van Witteloostuijn, A., & Beugelsdijk, S. (2017). What’s in a p? Reassessing best practices for conducting and reporting hypothesis-testing research. Journal of International Business Studies, 48(5), 535–551. https://doi.org/10.1057/s41267-017-0078-8

Murphy, K.R. (2019). Reducing our dependence on null hypothesis testing: A key to enhance the reproducibility and credibility of our science. SA Journal of Industrial Psychology, 45(0), a1717. https://doi.org/10.4102/sajip.v45i0.1717

Murtonen, M. (2015). University students’ understanding of the concepts empirical, theoretical, qualitative and quantitative research. Teaching in Higher Education, 20(7), 684–698. https://doi.org/10.1080/13562517.2015.1072152

National Development Plan. (2013). NDP 2030. Retrieved from http://www.gov.za/issues/national_development_plan_2030

Nicholas, D., Rodríguez-Bravo, B., Watkinson, A., Boukacem-Zeghmouri, C., Herman, E., Xu, J., … Świgoń, M. (2017). Early career researchers and their publishing and authorship practices. Learned Publishing, 30(3), 205–217. https://doi.org/10.1002/leap.1102

Nickerson, C.A. (2018). There is no empirical evidence for critical positivity ratios: Comment on Fredrickson (2013). Journal of Humanistic Psychology, 58(3), 284–312. https://doi.org/10.1177/0022167817740468

Nosek, B.A., Alter, G., Banks, G.C., Borsboom, D., Bowman, S.D., Breckler, S.J., … Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374

Offutt, J. (2018). What is the value of the peer-reviewing system? Software Testing, Verification and Reliability, 28(5), e1687. https://doi.org/10.1002/stvr.1687

O’Neil, S., & Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: A critical review. SA Journal of Industrial Psychology, 42(1), 1–16. https://doi.org/10.4102/sajip.v42i1.1350

Patton, M.Q. (1999). Enhancing the quality and credibility of qualitative analysis. Health Services Research, 34(5 Pt 2), 1189.

Peterson, C., Park, N., Pole, N., D’Andrea, W., & Seligman, M.E. (2008). Strengths of character and posttraumatic growth. Journal of Traumatic Stress: Official Publication of the International Society for Traumatic Stress Studies, 21(2), 214–217. https://doi.org/10.1002/jts.20332

Plakoyiannaki, M.E., Wei, T., & Prashantham, S. (2019). Rethinking qualitative scholarship in emerging markets: Researching, theorizing and reporting. Management and Organization Review, 15(2), 1–18. https://doi.org/10.1017/mor.2019.27

Ponterotto, J.G. (2010). Qualitative research in multicultural psychology: Philosophical underpinnings, popular approaches, and ethical considerations. Cultural Diversity and Ethnic Minority Psychology, 16(4), 581. https://doi.org/10.1037/a0012051

Popper, K. (1983). Realism and the aim of science: From the postscript to the logic of scientific discovery. London: Routledge.

Porter, E.J. (1989). The qualitative-quantitative dualism. Image: The Journal of Nursing Scholarship, 21(2), 98–102. https://doi.org/10.1111/j.1547-5069.1989.tb00107.x

Power, A.D. (Ed.). (2018). Cyberpsychology and society: Current perspectives. Abingdon: Routledge.

Raubenheimer, I. (1994). Die tydskrif vir bedryfsielkunde 20 jaar in bedryf. SA Journal of Industrial Psychology, 20(3), 22–24. https://doi.org/10.4102/sajip.v20i3.579

Sharpe, D. (2013). Why the resistance to statistical innovations? Bridging the communication gap. Psychological Methods, 18(4), 572. https://doi.org/10.1037/a0034177

Somoza-Fernández, M., Rodríguez-Gairín, J.M., & Urbano, C. (2018). Journal coverage of the emerging sources citation index. Learned Publishing, 31(3), 199–204. https://doi.org/10.1002/leap.1160

South African Journal of Industrial Psychology (SAJIP). (2019). Journal information. Retrieved from https://sajip.co.za/index.php/sajip/pages/view/journal-information

Tavris, C. (2014). Science and pseudoscience in clinical psychology. New York, NY: Guilford Publications.

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage.

Tedeschi, R.G., Shakespeare-Finch, J., Taku, K., & Calhoun, L.G. (2018). Posttraumatic growth: Theory, research, and applications. s.l.: Routledge (in press).

Turja, T., & Oksanen, A. (2019). Robot Acceptance at Work: A Multilevel Analysis Based on 27 EU Countries. International Journal of Social Robotics, 1–11.

United States Bureau of Labor Statistics (2017). Fastest growing occupations: Occupational outlook handbook: U.S. Bureau of Labor Statistics. Retrieved from http://www.bls.gov/ooh/fastest-growing.htm

Upadhya, A.K., & Khandelwal, K. (2018). Applying artificial intelligence: Implications for recruitment. Strategic HR Review, 17(5), 255–258. https://doi.org/10.1108/SHR-07-2018-0051

Van de Schoot, R., Kaplan, D., Denissen, J., Asendorpf, J.B., Neyer, F.J., & Van Aken, M.A. (2014). A gentle introduction to Bayesian analysis: Applications to developmental research. Child Development, 85(3), 842–860. https://doi.org/10.1111/cdev.12169

Van der Westhuizen, S. (2015). Reliability and validity of the attitude towards research scale for a sample of industrial psychology students. South African Journal of Psychology, 45(3), 386–396. https://doi.org/10.1177/0081246315576266

Van Zyl, L.E., Deacon, E., & Rothmann, S. (2010). Towards happiness: Experiences of work-role fit, meaningfulness and work engagement of industrial/organisational psychologists in South Africa. SA Journal of Industrial Psychology, 36(1), 1–10. https://doi.org/10.4102/sajip.v36i1.890

Van Zyl, L.E., Nel, E., Stander, M.W., & Rothmann, S. (2016). Conceptualising the professional identity of industrial or organisational psychologists within the South African context. SA Journal of Industrial Psychology, 42(1), 1–13. https://doi.org/10.4102/sajip.v42i1.1379

Wingfield, B. (2018). The peer review system has flaws. But it’s still a barrier to bad science. The Conversation. Retrieved from https://theconversation.com/the-peer-review-system-has-flaws-but-its-still-a-barrier-to-bad-science-84223

Witkowski, T. (2015). Psychology gone wrong: The dark sides of science and therapy. New York, NY: Universal-Publishers.


1. For a detailed description of the TOP guidelines, see Table 2 in Efendic and Van Zyl (2019).

2. Note that this does not imply the associated analytics. The focus here is on the research methodology.

3. Based on both personal experience as editor in SAJIP as well as echoing the concerns of the Editorial Board at the 2014 board meeting.

4. Young academics and policymakers are encouraged to see Hoole (2019).

5. A second argument against this point is presented in the response to Article 6.
