About the Author(s)

Peter T. Ayuk
Department of Business Management, Milpark Business School, Milpark Education Pty Ltd, South Africa

Gerrie J. Jacobs
Department of Mathematics, Science & Technology; Embury Institute for Higher Education, Midrand Waterfall Campus, Gauteng Province, South Africa


Ayuk, P.T., & Jacobs, G.J. (2018). Developing a measure for student perspectives on institutional effectiveness in higher education. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 44(0), a1485. https://doi.org/10.4102/sajip.v44i0.1485

Original Research

Developing a measure for student perspectives on institutional effectiveness in higher education

Peter T. Ayuk, Gerrie J. Jacobs

Received: 24 Aug. 2017; Accepted: 18 Feb. 2018; Published: 26 Apr. 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Orientation: This study outlines institutional effectiveness (IE) in higher education (HE) and interrogates its underlying elements from a student perspective. Following a review of contemporary perspectives on student educational outcomes, the study identifies and explores the importance of four pertinent indicators of IE in the context of a South African (SA) higher education institution (HEI).

Research purpose: This study aimed to explore the structural validity and reliability of the Student Educational Outcomes Effectiveness Questionnaire (SEEQ), administered to students at an SA HEI, collecting data on their perceptions of IE.

Motivation for the study: Institutional effectiveness is a contested concept in HE and several approaches to define it, using various sets of underpinning elements, can be found. The conceptualisation and measuring of IE within the SA HE sector is a hugely neglected area of research. This study therefore attempted to delineate and to gauge IE, utilising the perceptions and preferences of students at an SA HEI.

Research design, approach and method: Data for this study were collected using a self-selection sample (N = 807) of students from four schools at the selected HEI. Reliability and exploratory factor analyses were performed to explore the internal consistency and structural validity of the above-mentioned SEEQ.

Main findings: The reliability of SEEQ is deemed acceptable, and the validity of the four theoretical constructs (or dimensions) hypothesised in respect of IE from a student perspective was supported.

Practical/managerial implications: Preliminary empirical evidence suggests that SEEQ could be employed in a cautious manner by HEIs (especially in SA), with a view to gauging IE, as well as promoting the scholarship and management of institutional performance and student success.

Contribution or value-add: This article presents a multidimensional approach to the depiction and measurement of IE from a student perspective. It makes a useful initial contribution to a grossly under-researched phenomenon in the SA HE sector.


What makes a higher education institution effective?

More than three decades ago, Cameron (1986, p. 539) posed the question: What makes an organisation ‘excellent, of high quality, productive, efficient, healthy, or possessing vitality?’ Individually and jointly, all of these aspects serve as proxies for the concept of organisational effectiveness, and the understanding of what contributes to effectiveness in organisations has evolved over the years through numerous inquiries (cf. Ashraf & Kadir, 2012; Cameron, 1978, 1986; Kwan & Walker, 2003; Quinn & Rohrbaugh, 1981; Roland, 2011; Shilbury & Moore, 2006). The term ‘organisation’ has for many years not been associated with the higher education (HE) sector, with ‘institution’ the preferred label. Tucker and Bryan’s (1988, p. 4) view of academic management as ‘an art; not a science, because a college or a university is very unlike the standard corporation or business’ had, and still has, wide backing (Drew, 2006; Mintzberg, 2004; Schmidtlein & Milton, 1989; Scott, Bell, Coates, & Grebennikov, 2010). However, the upswing towards managerialism and accountability has led many scholars to an opposing point of view, namely that a higher education institution (HEI) is, and should increasingly be, managed like a business (Davis, Jansen van Rensburg, & Venter, 2016; Deem & Brehony, 2005; Deem, Hillyard, & Reed, 2007; Kolsaker, 2008). These contrasting opinions affirm Cameron’s (1986, p. 540) argument that ‘As the metaphor describing an organization changes, so does the definition or appropriate model of organizational effectiveness’. The concept of institutional effectiveness (IE) gradually became HE’s take on organisational effectiveness.

Institutional effectiveness essentially asks the following question (Leimer, 2011; Roland, 2011; Volkwein, 2007): how well is a higher education institution (HEI) fulfilling its purpose? Answers to three questions are typically sought when monitoring whether a HEI is indeed making strides towards accomplishing its purpose (Boehmer, 2006; Welsh & Metcalf, 2003), namely: (1) to what extent the institutional mission is being realised, (2) whether progress is made in attaining the goals and objectives of the institution’s strategic plan and (3) whether the stated educational outcomes of an institution (at programme, departmental and school or faculty levels) are being achieved. Volkwein (2011, p. 5) highlights a pertinent development of the post-2010 era in HE, namely that: ‘accountability and accreditation have shifted institutions’ emphases to goal attainment, program evaluation, and institutional effectiveness – with especially heightened emphasis on student learning outcomes’ (authors’ emphasis).

The assessment of student learning outcomes seems to have become a key indicator of IE in the current HE context. The US Middle States Commission on Higher Education (2005, p. 3) aptly captures the essence of the IE challenge facing HE institutions via two questions, namely: ‘As an institutional community, how well are we collectively doing what we say we are doing?’ and, in particular, ‘How do we support student learning, a fundamental aspect of institutional effectiveness?’

Purpose of the study and research objectives

There is no single best way of measuring the effectiveness of a HEI. This fundamental reality stems from the recognition that, like most social systems, HEIs have multiple stakeholders, who might hold different, sometimes competing conceptions of what constitutes effectiveness. A paradox exists in the role of students as stakeholders in education. While many argue that educational institutions exist primarily to serve the interests of students (Ashraf & Kadir, 2012; Bitzer, 2003; Ronco & Brown, 2000), evidence persists of students’ vulnerability in power relationships with other primary stakeholders within HEIs (Glasser & Powers, 2011; Seale, Gibson, Haynes, & Potter, 2015).

Although there has been a recent surge in measurement research relevant to student perspectives of IE, the resultant instruments are either too narrow in focus (often limited to assessing one or two educational outcomes) or remain untested in the South African context. Some useful outputs in this regard are found in Díaz and De León (2016) for measuring dropout intentions, an indicator of student retention (SR), Maddox and Nicholson (2014) on student satisfaction and Soldner, Smither, Parsons and Peek (2016) for measuring student persistence, a proxy for SR. To attain a fuller understanding of how students value the HEIs in which they study, an instrument that can capture multiple dimensions of IE and that is developed and tested in the local context, is required. Although this article cannot claim to fill this gap, it strives to make a contribution in that regard.

Upholding the primacy of student interests as its point of departure, this study has as its primary purpose (or aim) to explore the reliability and construct validity of an instrument used to measure and report on students’ perceptions of IE at a HEI. In support of this aim, the article pursues the following objectives:

  • to review theoretical perspectives on pertinent indicators or dimensions of IE as regarded by students
  • to explore and explain the reliability and factor structure of an instrument used to measure indicators of IE identified in objective 1 above
  • to outline the opportunities, which the proposed instrument might present for further research.

Literature review

The importance of the student perspective of institutional effectiveness

One way of framing IE is the strategic constituency approach (SCA), which originated as the participant satisfaction approach (Cameron, 1978; Quinn & Rohrbaugh, 1983). Rooted in stakeholder theory (Donaldson & Preston, 1995; Freeman, 1984), SCA raises the question about whose interests should be privileged in defining effectiveness criteria. At the core of this approach is the recognition that the various stakeholders of an institution might value different outcomes, and therefore, diverse views must be considered in defining relevant IE criteria (Balduck & Buelens, 2008). As Ashraf and Kadir (2012) appropriately argue, the stakeholder approach to effectiveness integrates the concept of social responsibility and might be readily attractive for application in HE environments where effectiveness often has to be adjudged through the eyes of multiple constituents, both internal and external to the institution.

In the last three decades, the idea of the student voice, which explores the importance of students’ perspectives in diverse aspects of educational provision, has gained prominence in HE research (Leckey & Neill, 2001; Levin, 1998; Subramanian, Anderson, Morgaine, & Thomson, 2013). Subramanian et al. (2013, p. 137) identify four ways in which students’ insights might improve the educational process, encapsulating ideas that have been articulated by several other authors. These include (1) providing formative feedback on pedagogical practices (Harvey, 2003; Seale, 2010), (2) being a key element of the quality assurance strategy (Estes, 2004; Leckey & Neill, 2001; Moufahim & Lim, 2015), (3) promoting engagement and reflection on teaching and learning activities (Cook-Sather, 2006) and (4) advancing democratic participation and empowerment (Cohn, 2012; Del Prato, 2013; Levin, 1998).

The student voice must, however, be considered with care. One major critique is that the student voice is in reality not a lone or unified voice but one that is constructed from a mosaic of diverse, sometimes conflicting voices (McLeod, 2011). Another critique questions whether students are adequately equipped to know and articulate what is best for their own learning or related outcomes (Slade & McConville, 2006).

In the end, these critiques, in the authors’ view, do not necessarily devalue the importance of the student voice but rather urge that it be used more habitually, albeit responsibly.

Student educational outcomes

Almost five decades ago, Kohlberg and Mayer (1972) elegantly captured the importance of clearly identifying the most salient goals of education thus:

The most important issue confronting educators and educational theorists is the choice of ends for the educational process. Without clear and rational educational goals, it becomes impossible to decide which educational programs achieve objectives of general import and which teach incidental facts and attitudes of dubious worth. (p. 449)

These sentiments continue to be echoed by contemporary HE scholars (Etzel & Nagy, 2016; Judson & Taylor, 2014). Inspired by Volkwein’s (2011) vital observation regarding the primacy of student learning outcomes in defining IE, a search of the relevant literature for what might constitute such outcomes revealed four key themes. This review privileges a sociological perspective (Díaz & De León, 2016), focusing on the institutional determinants of each outcome as they are more likely to signify IE and therefore highlight critical areas for intervention (Hilton & Gray, 2017), as opposed to other confounding or mediating factors. Student educational outcomes (SEOs) are used in this study to represent a set of four distinct but related outcomes, which have been reported by other scholars to various degrees. For instance, Etzel and Nagy’s (2016) notion of academic success encompasses three components namely, academic performance, academic satisfaction and major change intention (a proxy for SR), while DeShields, Kara and Kaynak (2005) focus on student satisfaction and retention. Additionally, there is a growing body of literature on the importance of graduate employability (EM) (Archer & Chetty, 2013; Dacre Pool & Sewell, 2007; O’Leary, 2013; Yorke, 2010) as an indicator of IE. A brief review of each outcome now follows.

Academic achievement

As an educational outcome, a student’s academic achievement (AA) or performance can be indicated by objective measures such as term marks and proportion of courses passed per semester (Gibbs, 2010; Mega, Ronconi, & De Beni, 2014) or subjectively, in terms of student perceptions of the gains in knowledge, skills and attitudes attributable to the engagement with an institution or learning programme (Astin, 1984; Guay, Ratelle, Roy, & Litalien, 2010). Key institutional factors attributable to AA have been found to include the classroom emotional climate and student engagement (Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Reyes, Brackett, Rivers, White, & Salovey, 2012), extrinsic motivation (Guay et al., 2010) and institutional processes and presage variables such as staff:student ratios and the quality of teaching staff (Gibbs, 2010).

Educational satisfaction

Educational satisfaction (ES) emphasises the student’s role as ‘consumer’ of a HEI’s educational offerings and service (Van Schalkwyk & Steenkamp, 2014; Woodall, Hiller, & Resnick, 2014) and relates to the student’s total experience of both the academic and supporting elements of what a HEI typically offers (Negricea, Edu, & Avram, 2014). Pertinent institutional factors found to influence ES include institutional values such as integrity, diversity and a student-centred approach, and giving effect to these values, thus showing that they are more than just ‘empty promises’ (Kuh et al., 2014; Negricea et al., 2014), as well as the perceived quality of educational outcomes and the levels of academic, administrative and technical support (Maddox & Nicholson, 2014; Temple, Callender, Grove, & Kersh, 2014). Additionally, Khiat (2013) reports that the difference in levels of satisfaction reported by students might lie in the student type – that is, traditional (i.e. typical, full-time) versus non-traditional (i.e. atypical, part-time) students, as they might have diverse expectations of the institution.

Student retention

Student retention denotes an institution’s capacity to retain its students and is a function of both persistence, that is, the propensity of students to continue their studies (Schreiner & Nelson, 2013; Soldner et al., 2016), and their loyalty (Fares, Achour, & Kachkar, 2013; Vianden & Barlow, 2014) to the institution. As Fontaine (2014) elegantly explains, students enter a HEI with certain needs and only institutions that understand and can meet such needs retain students until the successful completion of their courses. Angulo-Ruiz and Pergelova’s (2013) SR model provides empirical support for three hitherto known institutional determinants of SR (cf. Fares et al., 2013; Schreiber, Luescher-Mamashela, & Moja, 2014; Von Treuer & Marr, 2013) – (1) teaching and learning effectiveness, (2) peer interaction and (3) academic and social integration – and demonstrates empirical support for a fourth, namely, institutional image. Institutional image denotes the perceived prestige attributable to an institution and is thought to promote supportive attitudes such as pride and trust (Sung & Yang, 2008), which, in the context of a HEI, are found to have a stronger influence on student satisfaction than on service quality (Brown & Mazzarol, 2009).


Employability

As an educational outcome, EM denotes an institution’s capacity to develop the students’ knowledge, skills and attributes for employment and career success after graduation (Kinash, Crane, Schulz, Dowling, & Knight, 2014). The possibility of accessing employment and building a meaningful career is arguably the most practical reason for seeking a HE qualification (Yorke, 2010). Key institutional determinants of EM include the content knowledge, skills and attitudes which students develop through their educational experience (Dacre Pool, Qualter, & Sewell, 2014; Yorke, 2010). Other relevant factors might include the institution’s capacity to provide opportunities for networking, promoting internationalisation and an entrepreneurial orientation (Artess, Hooley, & Mellors-Bourne, 2017). Additionally, and especially for students without any prior work experience, the quality of work-integrated learning and work readiness support programmes is of prime importance in easing the path from college to the workplace (Pegg, Waldock, Hendy-Isaac, & Lawton, 2012).

Drawing from the aforementioned sources, the institutional determinants of the four elements of IE (as signified by ES, AA, SR and EM) outlined above are summarised in Figure 1.

FIGURE 1: Institutional determinants of four constructs of institutional effectiveness.

The above dimensions of IE and their corresponding determinants form the basis on which items that constitute the instrument used to measure IE in this study were conceptualised and formulated. The instrument was captioned the Student Educational Outcomes Effectiveness Questionnaire (SEEQ). The subsequent part of the article focuses on the exploration of the reliability and validity of SEEQ, a key objective of this article.

Research design

Research approach

The overarching paradigm of pragmatism (Creswell & Plano Clark, 2011; Onwuegbuzie & Leech, 2005) was deemed most appropriate for addressing a current real-life issue like the IE of HEIs. The empirical component, which constitutes the crux of the article, is post-positivist in nature (Jacobs & Jacobs, 2014, p. 37), assuming that an exterior reality exists and that this reality cannot be known fully, but that it can be measured (at least partially).

Research method

As this study aims to explore the reliability and construct validity of an instrument, it follows methods widely applicable in the large and growing body of research focusing on measurement development and validation (cf. Bagozzi, Yi, & Phillips, 1991; Delcourt, Gremler, van Riel, & van Birgelen, 2016; Mackenzie, Podsakoff, & Podsakoff, 2011). The scope of this article is, however, limited to the (initial) measurement development phase, which demonstrates the integrity of the instrument and facilitates further theory refinement (Henson & Roberts, 2006). Drawing from prior research, institutional variables thought to influence IE (from a student’s perspective) were identified and grouped into theoretical constructs. Data were collected through a cross-sectional survey involving a representative sample of the student population of a HEI and used to examine the reliability and validity of the theorised constructs.

Research participants

The participants were drawn from four schools (i.e. academic divisions) within a HEI based in South Africa. Within each school, all students who had completed at least one semester were invited to participate in the survey and the eventual research sample was constituted through a self-selection sampling process (Creswell, 2014; Saunders, Lewis, & Thornhill, 2012). Thirty-nine students participated in a pilot survey, which aimed at assessing whether the questionnaire items effectively communicated the researchers’ intentions. The results of the pilot were then used to enhance the structure and clarity of the instrument that was used in the actual survey.

A typical participant would be a female (almost 60%), block release (more than one in three) or distance learning (more than one in four) student who was enrolled directly (more than 80%) into either school A, C or D (90% of the sample), aged 34 years or less (almost 7 out of 10) and had some work experience (more than 17 in 20). Students studying via the block release mode of delivery attend contact sessions in time blocks of typically one week every second month. Just under 20% of the sample were full-time students studying via contact learning; a proportional representation of the entire institutional profile, where 1897 of 11 158 (i.e. 17%) of registered students are full-time contact learning students (Table 1).

TABLE 1: Demographic profile of the participants.

Measuring instrument

Initially, SEEQ contained 26 items rated on a Likert scale with five options (i.e. 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree) distributed across the four dimensions as outlined in Table 2.

TABLE 2: Initial structure of the questionnaire.
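As an illustration of how such responses can be scored, Likert codes may be aggregated per construct. The item names (es1, aa1, …) and responses below are invented for this sketch and are not the published SEEQ items.

```python
# Hypothetical SEEQ-style responses on the five-point Likert scale.
# Item names (es1, es2, aa1, ...) are invented for this sketch and are
# not the actual questionnaire items.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

responses = [
    {"es1": "agree", "es2": "strongly agree", "aa1": "neutral"},
    {"es1": "disagree", "es2": "agree", "aa1": "agree"},
]

def construct_score(response, prefix):
    """Mean Likert code over all items belonging to one construct,
    identified here by a shared item-name prefix."""
    codes = [LIKERT[answer] for item, answer in response.items()
             if item.startswith(prefix)]
    return sum(codes) / len(codes)

print(construct_score(responses[0], "es"))  # (4 + 5) / 2 = 4.5
```

In practice, each construct (ES, AA, SR, EM) would span the item counts listed in Table 2 rather than the two or three items shown here.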

The formulation of the items was informed mainly by the authors’ conception of the institutional determinants of each of the four dimensions deemed to indicate IE, based on prior literature (Figure 1). The face validity and content validity of the instrument were verified through expert opinions (Leedy & Ormrod, 2014, p. 91). The reliability (internal consistency) and construct validity of the research instrument, which form the core of this article, are explored and explained subsequently.

Research procedure and ethical considerations

This study involved a survey, which was administered via both online and paper-based media. All continuing students (of the four participating schools) who had completed at least one semester were invited, via email prompts, to complete the online survey. In addition, students in contact sessions were given the option to complete printed questionnaires manually. The data were imported (online responses) or typed (paper-based responses) into SPSS version 23 for subsequent analysis. Duplicates were identified through student numbers and removed. Ultimately, 807 usable questionnaires or records resulted from the survey.

Ethical clearance for the study was obtained from the Research Ethics Committee of the Faculty of Education, University of Johannesburg, because the original inquiry forms part of one of the authors’ doctoral research. In keeping with the applicable ethical requirements, the consent of the participating institution and the individual participants was sought through formal letters explaining the objectives of the study and the input required from them; areas of potential risk or conflict were highlighted and agreed upon. The confidentiality, anonymity and integrity of the institution and individual participants were maintained at all times. Participating individuals were informed of their right to withdraw from the study at any time if they felt that the ethical standards were compromised in any way.

Statistical analysis

The reliability of the SEEQ was explored by assessing the internal consistency of items within each construct (Leedy & Ormrod, 2014), which is statistically indicated by the Cronbach’s alpha coefficient (α), a widely used indicator of reliability (DeVellis, 2003; Saunders et al., 2012). Internal consistency was measured at three stages: firstly involving the items as initially conceptualised per construct (i.e. theoretical constructs), secondly after the items were reassigned according to their highest factor coefficients following exploratory factor analysis (EFA) (empirical constructs) and thirdly after items with factor loadings below 0.4 were excluded, as such items are regarded as trivial to the determination of the construct (Pallant, 2007, p. 192).
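Cronbach's alpha can be computed directly from an item-score matrix. The sketch below uses synthetic Likert-type data (not the SEEQ responses) to illustrate the calculation: alpha rises as the items share more common variance.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic data: four Likert items driven by one shared trait
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = np.clip(np.rint(3 + trait + 0.6 * rng.normal(size=(200, 4))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```

Perfectly correlated items of equal variance yield an alpha of exactly 1; values above the conventional 0.7 threshold indicate acceptable internal consistency.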

Construct validity was examined by interrogating the underlying factor structure of the instrument through EFA, a technique used in such validation studies (Chew, Kueh, & Aziz, 2017; Schaap & Kekana, 2016). Three types of factor analysis are widely used, namely EFA and confirmatory factor analysis (CFA), as well as hybrids of these two, which also gave rise to structural equation modelling (SEM) (Cabrera-Nguyen, 2010; Chew et al., 2017; Schaap & Kekana, 2016). As argued by Henson and Roberts (2006), EFA is an ‘exploratory method used to generate theory’, while ‘(CFA) is generally used to test theory when the analyst has sufficiently strong rationale regarding what factors should be in the data and what variables should define each factor’ (p. 395). As the scope of this article is limited to exploring the factor structure of a newly constituted data collection instrument, EFA was deemed to be more appropriate.

To assess the suitability of the data for EFA, two preliminary tests were conducted, namely the Kaiser–Meyer–Olkin (KMO) measure of sample adequacy and Bartlett’s test of sphericity. Then, principal axis factoring (PAF) involving Oblimin rotation with Kaiser normalisation (Kaiser, 1960; Pallant, 2007) was used as the extraction method, as it more explicitly focuses on the latent factors (in comparison to say, principal components analysis) (Gaskin & Happell, 2014; Henson & Roberts, 2006). The determination of the optimal number of factors to retain was done using parallel analysis (Horn, 1965), which is growing in stature as the ‘most accurate’ technique in this category, especially in social science research (Henson & Roberts, 2006, p. 399; Naidoo, Schaap, & Vermeulen, 2014, p. 12; Pallant, 2007, p. 183).


Results

Exploratory factor analysis screening tests

The suitability tests yielded an index of 0.939 for the KMO test (Kaiser, 1960) and a significant result (p < 0.001) for Bartlett’s test of sphericity (Bartlett, 1954), both exceeding the minimum requirements for conducting factor analysis, namely an index of at least 0.6 (for KMO) and p < 0.05 for Bartlett’s test (Pallant, 2007, p. 181; Tabachnick & Fidell, 2007). The data were therefore deemed suitable for factor analysis.
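Both screening statistics derive from the inter-item correlation matrix. A minimal sketch, run on simulated (not SEEQ) data: Bartlett's test checks whether the correlation matrix differs from an identity matrix, while KMO compares raw correlations with anti-image (partial) correlations.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and p-value
    for the null hypothesis that the correlation matrix is an identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    R_inv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    scale = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    partial = -R_inv / scale
    off_diag = ~np.eye(R.shape[0], dtype=bool)
    r_sq = (R[off_diag] ** 2).sum()
    p_sq = (partial[off_diag] ** 2).sum()
    return r_sq / (r_sq + p_sq)

# Simulated data: six items driven by one latent trait (not the SEEQ data)
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))
items = latent + 0.5 * rng.normal(size=(500, 6))
chi2, p_value = bartlett_sphericity(items)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}, KMO = {kmo(items):.3f}")
```

With strongly correlated items, as here, the KMO index approaches 1 and Bartlett's p-value is effectively zero, mirroring the pattern reported for the SEEQ data.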

Results of exploratory factor analysis

As shown in Table 3, an application of PAF on the initial data set (based on the 26-item questionnaire) suggested that four factors met Kaiser’s (1960) criterion of eigenvalues greater than 1 and may therefore be retained for further analysis (Velicer, 1976, p. 322).

TABLE 3: Initial factor extraction: Total variance explained.

Items 7–25 have been omitted from Table 3 for the sake of brevity. The results show that the four principal factors (eigenvalue > 1) collectively explain just below 60% of the total variability in student perceptions of the quality of the educational outcomes they derive from the institution.
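Kaiser's criterion amounts to counting the eigenvalues of the inter-item correlation matrix that exceed 1; because the eigenvalues of a correlation matrix sum to the number of items, the retained eigenvalues also give the proportion of total variance explained. A sketch on simulated two-factor data (not the SEEQ responses):

```python
import numpy as np

def kaiser_retention(data):
    """Eigenvalues of the inter-item correlation matrix, the number of
    factors retained under Kaiser's criterion (eigenvalue > 1) and the
    percentage of total variance those factors explain."""
    R = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    n_retain = int((eigvals > 1).sum())
    pct = 100 * eigvals[:n_retain].sum() / eigvals.sum()
    return eigvals, n_retain, pct

# Simulated two-factor structure: items 1-4 load on one latent trait,
# items 5-8 on another (illustrative only)
rng = np.random.default_rng(2)
f1 = rng.normal(size=(500, 1))
f2 = rng.normal(size=(500, 1))
items = np.hstack([f1 + 0.5 * rng.normal(size=(500, 4)),
                   f2 + 0.5 * rng.normal(size=(500, 4))])
_, n_factors, pct_explained = kaiser_retention(items)
print(n_factors, round(pct_explained, 1))
```

Here the criterion correctly recovers the two simulated factors, analogous to the four factors recovered from the 26-item SEEQ data in Table 3.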

Upon inspection of the initial pattern and structure matrices, items that did not load substantially (i.e. factor coefficient less than 0.4) (Pallant, 2007, p. 192) were removed, and EFA was rerun. The removal of low loading items improved the total variance explained by almost 2% to 61.78% (see Table 4), which exceeds the 60% threshold for acceptable construct validity (Field, 2013, p. 677) and the mean value of just over 52% reported by Henson and Roberts (2006, p. 402) from a review of 43 articles in applied psychology research.

TABLE 4: Total variance explained after dropping items with low factor coefficients.

To further interrogate the number of permissible factors, parallel analysis (Horn, 1965) was performed, which also yielded four factors with eigenvalues greater than the corresponding criterion values for a randomly generated data matrix of comparable size (Choi, Fuqua, & Griffin, 2001; Horn, 1965), thus further supporting the appropriateness of a four-factor solution, as shown in Table 5.

TABLE 5: Parallel analysis for student educational outcomes effectiveness questionnaire.
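Horn's procedure can be sketched as follows: leading observed eigenvalues are retained only while they exceed the chosen percentile of eigenvalues obtained from random data of the same dimensions. Again, the data below are simulated, not the SEEQ responses.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain leading observed eigenvalues only
    while they exceed the chosen percentile of eigenvalues computed from
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(
        np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        noise = rng.normal(size=(n, p))
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    criteria = np.percentile(random_eigs, percentile, axis=0)
    n_retain = 0
    for obs, crit in zip(observed, criteria):
        if obs <= crit:
            break
        n_retain += 1
    return n_retain, observed, criteria

# Illustrative two-factor data: items 1-4 and 5-8 load on separate traits
rng = np.random.default_rng(3)
f1 = rng.normal(size=(400, 1))
f2 = rng.normal(size=(400, 1))
items = np.hstack([f1 + 0.5 * rng.normal(size=(400, 4)),
                   f2 + 0.5 * rng.normal(size=(400, 4))])
print(parallel_analysis(items)[0])  # retains the 2 simulated factors
```

Because the random-data criterion values typically exceed 1 for the first few eigenvalues, parallel analysis is usually more conservative than Kaiser's criterion, which is part of its appeal.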

To facilitate the interpretation of the four factors, Oblimin rotation (with Kaiser normalisation) was performed. The results (see Table 6) show a clear four-factor structure solution, with each item loading strongly on only one factor and each factor having the recommended minimum of three variables (Pallant, 2007, p. 192). Table 6 shows the combined pattern and structure matrix after low loading items were removed. As all but two items loaded strongly on the corresponding factors as had been conceptualised in the design of the instrument, the factors were intuitively labelled according to the constructs as initially theorised.

TABLE 6: Pattern and structure matrix for student educational outcomes effectiveness questionnaire after dropping low loading items.

Two items (aa5 and aa6) initially thought to be indicative of AA were instead found to be more indicative of ES. Further reflection on the formulation and meanings of these two items suggests that they are indeed more indicative of the general quality of academic provision and therefore more likely to influence satisfaction than AA.

Weak to moderate positive correlations were observed between the four factors (see Table 7), thus signalling that although the four constructs may be regarded as distinct scales, they could be mutually reinforcing. For example, the strongest correlation is observed between ES and SR (r = 0.591), which supports the intuition that when students are happy with the quality of educational provision, they would be more loyal to the institution and strive to retain their place at the institution.

TABLE 7: Factor correlation matrix for student educational outcomes effectiveness questionnaire.

To summarise, the results highlight the possibility that each of the four constructs represents a unique indicator of IE in students’ minds, thus validating the student educational outcomes dimensions as conceptualised at the onset of the study.

Internal consistency (reliability) of Student Educational Outcomes Effectiveness Questionnaire

Table 8 shows that in all three phases, Cronbach’s alpha values exceeded the admissible threshold of 0.7 (DeVellis, 2003), signifying that the items within each cluster reasonably measured the same construct, that is, that each cluster is unidimensional. No significant changes in alpha were observed over the three phases. Although this might suggest that dropping the three items with factor pattern coefficients below 0.4 was redundant, doing so yielded an increase in the total variance explained (see Table 4). All four constructs had the recommended minimum of three items per construct (Pallant, 2007, p. 192).

TABLE 8: Cronbach’s alpha (α) for student educational outcomes effectiveness questionnaire.


Discussion

The aim of this article was to explore the reliability and validity of an instrument used to measure students’ perceptions of IE in the context of a private HEI in South Africa.

Outline of the results

In line with its aim, the article focuses on demonstrating the reliability and validity of the proposed instrument as a tool for estimating the IE of a HEI in terms of students’ perceptions of ES, AA, EM and SR. Firstly, the research results demonstrate the reliability of the instrument, with Cronbach’s alpha coefficients ranging between 0.77 and 0.89 for all four constructs. Secondly, the results from EFA support the four components of IE as initially conceptualised. Furthermore, EFA results support the preliminary expert opinions regarding the face validity and content validity of the instrument. Thus, in essence, the results signify the potential value of SEEQ as a useful instrument for measuring the four indicators of IE explored in this study. It is, however, imperative to review each key element of the results in relation to findings from prior research.

Firstly, the instrument was found to reliably and validly measure ES in the context of the participating HEI. The results support the value of importing the construct of consumer satisfaction from consumer behaviour (cf. Kotler, Keller, Koshy, & Jha, 2009; Sojkin, Bartkowiak, & Skuza, 2012) into the HE domain. Further, the constituent items that make up the ES construct (see Table 6) support the view of many others that student ES is a function of both the content (i.e. quality of learning outcomes, quality of advising, quality of administrative and technical support) and process (i.e. the efficiency of administrative and technical operations as well as the school climate) of the learning environment (Maddox & Nicholson, 2014; Wilkins & Balakrishnan, 2013).

Secondly, the results show that SEEQ can be used as a reliable and valid instrument to measure AA, a construct recognised in the SA HE policy environment as arguably the most potent indicator of teaching and learning effectiveness (DHET, 2013). The results support the view that students’ perceptions of AA can be a credible proxy (in lieu of assessment scores, for instance) for determining levels of AA (Guay et al., 2010; Stadem et al., 2017).

Thirdly, the empirical results from this study demonstrate the usefulness of SEEQ to reliably and validly measure EM as an indicator of IE from a student’s perspective. This study conceptualises EM as a dynamic phenomenon (rather than a static outcome), focusing on an institution’s capacity to equip students with the competences and attributes required to access (or create) and sustain gainful employment. The six items that signify EM in SEEQ draw heavily from (and, in turn, by virtue of the current results, lend empirical credence to) Knight and Yorke’s (2002) Understanding, Skills, Efficacy beliefs and Metacognition (USEM) model and, to a lesser extent, Dacre Pool and Sewell’s (2007) CareerEDGE model of graduate EM. Importantly, the items also focus on attributes that aim to build the individual into a potentially useful resource, regardless of context, thus advancing Schwartzman’s (2013) idea that the value of EM must extend beyond mere vocational needs.

Fourthly, the results demonstrate that SEEQ can be used as a reliable and valid instrument for measuring SR as an indicator of IE. In interrogating the construct of SR, SEEQ incorporates and affirms both components of retention, namely persistence (driven by factors such as academic and social engagement) and loyalty (driven by factors such as institutional image) as argued for many years by many scholars, notably Tinto (1982, 1998) and more recently, Angulo-Ruiz and Pergelova (2013), as well as Von Treuer and Marr (2013).

These results are to be understood against the backdrop of the demographic and contextual characteristics of the research sample, as these factors might influence reliability and validity scores (Cizek, 2016; Schaap & Kekana, 2016). Accordingly, it is important to keep in mind that the research sample was drawn from four schools (cf. faculties) of an SA HEI. The gender split of the sample was approximately 60:40 in favour of females. About 35% of the sample were aged 24 or younger and a further 34% were between 25 and 34 years old. The sample included students studying via various modes of delivery, including distance learning (26%), full-time contact learning (19%) and part-time contact learning (52%). Most (86%) of the participants were either currently employed or had some prior work experience.

Analyses of the potential influence of the selected student demographic variables (see Table 1) on the SEOs indicated, in summary, that only two of these factors, namely mode of delivery and work experience, appeared to materially influence the outcomes. More specifically, it was observed that full-time contact learning students (who also constituted the vast majority of students with no work experience) reported significantly lower levels of ES, EM and SR. No demographic variable was found to influence AA. A detailed exposition of these results lies outside the scope of this article and is reported elsewhere (see Ayuk, 2016, 2017).

Practical implications

This article offers researchers and academic managers who might be interested in evaluating the performance of HEIs a coherent set of constructs against which such assessment may be conducted. More pertinently, it offers a simple yet multidimensional instrument that might be considered for collecting credible data on the selected educational outcomes. By detailing the process of instrument design, as well as demonstrating the instrument’s reliability and validity in a specific context, the article enables the user to judge the suitability of the instrument for an intended context.

A credible data collection instrument is imperative for the generation of quality information on which academic leaders and managers may make evidence-based decisions on matters such as instructional design, design of academic support programmes as well as work readiness and work-integrated learning programmes.

The proposed instrument might also be a starting point for other interested researchers to test the appropriateness and credibility of its use in different contexts and by so doing, contribute towards a more comparable body of knowledge on what we know about the ways and extent to which HEIs serve the educational needs of their students.

Limitations and recommendations

As noted earlier, the findings of this article are based on data from one institution, whose contextual factors might distinctly influence the test reliability and validity (Cizek, 2016; Schaap & Kekana, 2016). Hence, the application of the proposed instrument in multiple institutional (private and public HE) contexts would enhance opportunities for more generalisable claims regarding its reliability and validity.

Furthermore, the items included in the proposed instrument are by no means exhaustive. It can be expected that further interrogation and application of the instrument might result in the addition and/or removal of items in ways that further enhance its reliability and validity.

Finally, findings from EFA can at best provide preliminary support for the structural validity of an instrument. Further research, requiring a new (and comparable) data set and using analytical techniques such as confirmatory factor analysis (CFA) or structural equation modelling (SEM), is required to refine the theory resulting from this initial effort, thus strengthening the credibility (Lahey et al., 2012) of the proposed instrument.


Conclusion

This article demonstrates the reliability as well as the structural validity of an instrument that was used to gauge the effectiveness of a HEI in South Africa. The study operationalises IE in terms of four SEOs: AA, ES, EM and SR. The proposed instrument combines well-known indicators of institutional performance in a novel way. By touching on multiple outcomes, the instrument facilitates a more comprehensive interrogation of the different dimensions of IE, as commonly perceived by students. The empirical results from this study showed that the reliability of the instrument was good. The results further indicated that the structural validity of the instrument was adequate, supporting the four latent factors initially theorised as distinct dimensions of IE.

The usefulness of the proposed instrument in measuring IE (even with respect to the four envisaged outcomes) is by no means conclusively established. Further research in other demographic contexts, with more expansive data sets, could enable a more robust interrogation of the psychometric properties of the proposed instrument and thus enhance its utility in effectiveness research in HE. It is thus hoped that this article will stimulate responses from other researchers that contribute towards a more widely applicable instrument, which might enable the production of more comparable results, so that the dividends from such studies can be more widely shared.


Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

The article’s design, abstract, introduction and discussion, as well as the research design were conceptualised jointly, but with G.J.J. fulfilling a leading writing role. The literature review, empirical design, execution, analyses and findings, as well as the references and proofreading were handled by P.T.A. Interpretations and implications were conceptualised and written by both authors.


References

Angulo-Ruiz, L. F., & Pergelova, A. (2013). The student retention puzzle revisited: The role of institutional image. Journal of Nonprofit and Public Sector Marketing, 25(4), 334–353. https://doi.org/10.1080/10495142.2013.830545

Archer, E., & Chetty, Y. (2013). Graduate employability: Conceptualization and findings from the University of South Africa. Progressio, 35(1), 134–165.

Artess, J., Hooley, T., & Mellors-Bourne, R. (2017). Employability: A review of the literature 2012–2016. York: Higher Education Academy.

Ashraf, G., & Kadir, S. (2012). A review on the models of organizational effectiveness: A look at Cameron’s model in higher education. International Education Studies, 5(2), 80–87. https://doi.org/10.5539/ies.v5n2p80

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25(4), 297–308.

Ayuk, P. T. (2016). Institutional culture and effectiveness of a South African private higher education institution. Unpublished thesis. Johannesburg: University of Johannesburg.

Ayuk, P. T. (2017). Predictors of educational satisfaction and student retention: Insights from a for-profit private higher education institution. In Proceedings of 11th International Business Conference, Dar Es Salaam, Tanzania, 24–27 September, pp. 1105–1119.

Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36(3), 421–458. https://doi.org/10.2307/2393203

Balduck, A. L., & Buelens, M. (2008). A two-level competing values approach to measure nonprofit organizational effectiveness. Belgium: Ghent University: Vlerick Leuven Gent Management School. Retrieved March 2016, from http://wps-feb.ugent.be/Papers/wp_08_510.pdf

Bartlett, M. S. (1954). A note on the multiplying factors for various chi-square approximations. Journal of the Royal Statistical Society, 16(Series B), 296–298.

Bitzer, E. M. (2003). Assessing students’ changing perceptions of higher education. South African Journal of Higher Education, 17(3), 164–177.

Boehmer, B. (2006). Strategic planning and quality assurance in U.S. higher education. Presentation to the Jilin University Management Training Institute, 14 July. Athens, GA: The University of Georgia. Retrieved May 06, 2017, from http://www.uga.edu/effectiveness/presentations.html

Brown, R. M., & Mazzarol, T. W. (2009). The importance of institutional image to student satisfaction and loyalty within higher education. Higher Education, 58, 81–95. https://doi.org/10.1007/s10734-008-9183-8

Cabrera-Nguyen, P. (2010). Author guidelines for reporting scale development and validation results in the Journal of the Society for Social Work and Research. Journal of the Society for Social Work and Research, 1(2), 99–103. https://doi.org/10.5243/jsswr.2010.8

Cameron, K. (1978). Measuring organisational effectiveness in higher education institutions. Administrative Science Quarterly, 23, 604–629. https://doi.org/10.2307/2392582

Cameron, K. S. (1986). Effectiveness as paradox: Consensus and conflict in conceptions of organizational effectiveness. Management Science, 32(5), 539–553. https://doi.org/10.1287/mnsc.32.5.539

Chew, K. S., Kueh, Y. C., & Aziz, A. A. (2017). The development and validation of the clinicians’ awareness towards cognitive errors (CATChES) in clinical decision making questionnaire tool. BMC Medical Education, 17(1), 58. https://doi.org/10.1186/s12909-017-0897-0

Choi, N., Fuqua, D., & Griffin, B. (2001). Exploratory analysis of the structure of scores from the multidimensional scales of perceived self-efficacy. Educational and Psychological Measurement, 61, 475–489. https://doi.org/10.1177/00131640121971338

Cizek, G. J. (2016). Validating test score meaning and defending test score use: Different aims, different methods. Assessment in Education: Principles, Policy and Practice, 23(2), 212–225. https://doi.org/10.1080/0969594X.2015.1063479

Cohn, M. (2012). Discovering the importance of student voice and active participation through the scholarship of teaching and learning. Teaching and Learning Together in Higher Education, 1(5), 1.

Cook-Sather, A. (2006). Sound, presence, and power: ‘Student voice’ in educational research and reform. Curriculum Inquiry, 36, 359–390. https://doi.org/10.1111/j.1467-873X.2006.00363.x

Creswell, J. W. (2014). Research design: Qualitative, quantitative and mixed-methods approaches. (4th edn.). Thousand Oaks, CA: Sage Publications.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed-methods research. (2nd edn.). Los Angeles, CA: Sage.

Dacre Pool, L., Qualter, P., & Sewell, P. J. (2014). Exploring the factor structure of the CareerEDGE employability development profile. Education and Training, 56(4), 303–313. https://doi.org/10.1108/ET-01-2013-0009

Dacre Pool, L., & Sewell, P. (2007). The key to employability: Developing a practical model of graduate employability. Education and Training, 49(4), 277–289.

Davis, A., Jansen van Rensburg, M., & Venter, P. (2016). The impact of managerialism on the strategy work of university middle managers. Studies in Higher Education, 41(8), 1480–1494. https://doi.org/10.1080/03075079.2014.981518

Deem, R., & Brehony, K. J. (2005). Management as ideology: The case of ‘New Managerialism’ in higher education. Oxford Review of Education, 31(2), 217–235. https://doi.org/10.1080/03054980500117827

Deem, R., Hillyard, S., & Reed, M. (2007). Knowledge, higher education, and the new managerialism: The changing management of UK universities. Oxford: Oxford University Press.

Delcourt, C., Gremler, D. D., van Riel, A. C., & van Birgelen, M. J. (2016). Employee emotional competence: Construct conceptualization and validation of a customer-based measure. Journal of Service Research, 19(1), 72–87. https://doi.org/10.1177/1094670515590776

Del Prato, D., (2013). Students’ voices: The lived experience of faculty incivility as a barrier to professional formation in associate degree nursing education. Nurse Education Today, 33(3), 286–290. https://doi.org/10.1016/j.nedt.2012.05.030

Department of Higher Education and Training (DHET). (2013). White paper for post-school education and training. Pretoria: DHET.

DeShields, O. W., Jr., Kara, A., & Kaynak, E. (2005). Determinants of business student satisfaction and retention in higher education: Applying Herzberg’s two-factor theory. International Journal of Educational Management, 19(2), 128–139. https://doi.org/10.1108/09513540510582426

DeVellis, R. F. (2003). Scale development: Theory and applications. (2nd edn.). Thousand Oaks, CA: Sage.

Díaz, P., & De León, A. T. (2016). Design and validation of a questionnaire to analyze university dropout – CADES. World Journal of Educational Research, 3(2), 267. https://doi.org/10.22158/wjer.v3n2p267

Donaldson, T., & Preston, L. E. (1995). The stakeholder theory of the corporation: Concepts, evidence, and implications. Academy of Management Review, 20(1), 65–91.

Drew, G. (2006). Balancing academic advancement with business effectiveness? International Journal of Knowledge, Culture and Change Management, 6(4), 117–125.

Estes, C. A. (2004). Promoting student-centered learning in experiential education. Journal of Experiential Education, 27(2), 141–160.

Etzel, J. M., & Nagy, G. (2016). Students’ perceptions of person–environment fit: Do fit perceptions predict academic success beyond personality traits? Journal of Career Assessment, 24(2), 270–288. https://doi.org/10.1177/1069072715580325

Fares, D., Achour, M., & Kachkar, O. (2013). The impact of service quality, student satisfaction, and university reputation on student loyalty: A case study of international students in IIUM, Malaysia. Information Management and Business Review, 5(12), 584–590.

Field, A. (2013). Discovering statistics using IBM SPSS statistics. n.p.: Sage Publications.

Fontaine, M. (2014). Student Relationship Management (SRM) in higher education: Addressing the expectations of an ever evolving demographic and its impact on retention. Journal of Education and Human Development, 3(2), 105–119.

Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston, MA: Pitman.

Gaskin, C.J., & Happell, B. (2014). On exploratory factor analysis: A review of recent evidence, an assessment of current practice, and recommendations for future use. International Journal of Nursing Studies, 51, 511–521. https://doi.org/10.1016/j.ijnurstu.2013.10.005

Giacobbi, P. R., Poczwardowski, A., & Hager, P. (2005). A pragmatic research philosophy for applied sport psychology. Retrieved from http://digitalcommons.brockport.edu/pes_facpub/80

Gibbs, G. (2010). Dimensions of quality. York: Higher Education Academy.

Glasser, H., & Powers, M. (2011). Disrupting traditional student-faculty roles, 140 characters at a time. Teaching and Learning Together in Higher Education, 2. Retrieved March 2016, from http://repository.brynmawr.edu/tlthe/vol1/iss2/5

Guay, F., Ratelle, C. F., Roy, A., & Litalien, D. (2010). Academic self-concept, autonomous academic motivation, and academic achievement: Mediating and additive effects. Learning and Individual Differences, 20(6), 644–653. https://doi.org/10.1016/j.lindif.2010.08.001

Harvey, L. (2003). Student feedback [1]. Quality in Higher Education, 9(1), 3–20. https://doi.org/10.1080/13538320308164

Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research. Common errors and some comment on improved practice. Educational and Psychological Measurement, 66(3), 393–416. https://doi.org/10.1177/0013164405282485

Hilton, Y. R., & Gray, M. (2017). Improving minority student persistence: An institutional factors approach. In M. Gray & K. D. Thomas (Eds.), Strategies for increasing diversity in engineering majors and careers (pp. 1–25). IGI Global. Retrieved July 2017, from https://www.igi-global.com/book/strategies-increasing-diversity-engineering-majors/172774

Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.

Jacobs, G. J., & Jacobs, M. (2014). Role perceptions of Science academics who teach to first year students: The influence of gender. Australasian Journal of Institutional Research, 19(1), 33–45.

Judson, K. M., & Taylor, S. A. (2014). Moving from marketization to marketing of higher education: The co-creation of value in higher education. Higher Education Studies, 4(1), 51. https://doi.org/10.5539/hes.v4n1p51

Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141–151.

Khiat, H. (2013). Conceptualisation of learning satisfaction experienced by non-traditional learners in Singapore. Retrieved June 19, 2017, from https://rua.ua.es/dspace/bitstream/10045/29130/1/EREJ_02_02_02.pdf

Kinash, S., Crane, L., Schulz, M., Dowling, D., & Knight, C. (2014). Improving graduate employability: Strategies from three universities. In Ireland International Conference on Education (IICE), Ireland, April 2014. Retrieved May 25, 2017 from http://epublications.bond.edu.au/tls/80

Knight, P. T., & Yorke, M. (2002). Employability through the curriculum. Tertiary Education and Management, 8(4), 261–276. https://doi.org/10.1080/13583883.2002.9967084

Kohlberg, L., & Mayer, R. (1972). Development as the aim of education. Harvard Educational Review, 42(4), 449–496. https://doi.org/10.17763/haer.42.4.kj6q8743r3j00j60

Kolsaker, A. (2008). Academic professionalism in the managerialist era: A study of English universities. Studies in Higher Education, 33(5), 513–525. https://doi.org/10.1080/03075070802372885

Kotler, P., Keller, K. L., Koshy, A., & Jha, M. (2009). Creation customer value satisfaction and loyalty. Marketing Management, 13, 120–125.

Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. L. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in US colleges and universities. Urbana, IL: National Institute for Learning Outcomes Assessment.

Kwan, P., & Walker, A. (2003). Positing organizational effectiveness as a second-order construct in Hong Kong higher education institutions. Research in Higher Education, 44(6), 705–726. https://doi.org/10.1023/A:1026179626082

Lahey, B. B., Applegate, B., Hakes, J. K., Zald, D. H., Hariri, A. R., & Rathouz, P. J. (2012). Is there a general factor of prevalent psychopathology during adulthood? Journal of Abnormal Psychology, 121(4), 971. https://doi.org/10.1037/a0028355

Leckey, J., & Neill, N. (2001). Quantifying quality: The importance of student feedback. Quality in Higher Education, 7(1), 19–32.

Leedy, P. D., & Ormrod, J.E. (2014). Qualitative research. In P. D. Leedy & J. E. Ormrod (Eds.), Practical research: Planning and design (pp. 141–172). Thousand Oaks, CA: Sage Publications.

Leimer, C. (2011). The rise of institutional effectiveness: IR competitor, customer, collaborator, or replacement? Association for Institutional Research Professional File, 120. Tallahassee, FL: AIR.

Levin, B. (1998). The educational requirement for democracy. Curriculum Inquiry, 28, 57–79. https://doi.org/10.1111/0362-6784.00075

MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing techniques. MIS Quarterly, 35(2), 293–334. https://doi.org/10.2307/23044045

Maddox, E. N., & Nicholson, C. Y. (2014). The business student satisfaction inventory (BSSI): Development and validation of a global measure of student satisfaction. Developments in Business Simulation and Experiential Learning, 35, 101–112.

McLeod, J. (2011). Student voice and the politics of listening in higher education. Critical Studies in Education, 52(2), 179–189. https://doi.org/10.1080/17508487.2011.572830

Mega, C., Ronconi, L., & De Beni, R. (2014). What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. Journal of Educational Psychology, 106(1), 121. https://doi.org/10.1037/a0033546

Middle States Commission on Higher Education. (2005). Assessing student learning and institutional effectiveness. Philadelphia, PA: Middle States Commission on Higher Education.

Mintzberg, H. (2004). Managers not MBAs. San Francisco, CA: Berrett-Koehler.

Moufahim, M., & Lim, M. (2015). The other voices of international higher education: An empirical study of students’ perceptions of British university education in China. Globalisation, Societies and Education, 13(4), 437–454. https://doi.org/10.1080/14767724.2014.959476

Naidoo, P., Schaap, P., & Vermeulen, L. P. (2014). The development of a measure to assess perceptions of the advanced aircraft training climate. The International Journal of Aviation Psychology, 24(3), 228–245. https://doi.org/10.1080/10508414.2014.918441

Negricea, C. I., Edu, T., & Avram, E. M. (2014). Establishing influence of specific academic quality on student satisfaction. Procedia-Social and Behavioral Sciences, 116, 4430–4435. https://doi.org/10.1016/j.sbspro.2014.01.961

O’Leary, S. (2013). Collaborations in higher education with employers and their influence on graduate employability: An institutional project. Enhancing Learning in the Social Sciences, 5(1), 37–50. https://doi.org/10.11120/elss.2013.05010037

Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology, 8(5), 375–387. https://doi.org/10.1080/13645570500402447

Pallant, J. (2007). SPSS survival manual – A step by step guide to data analysis using SPSS for Windows. (3rd edn.). Berkshire: McGraw-Hill Open University Press.

Pegg, A., Waldock, J., Hendy-Isaac, S., & Lawton, R. (2012). Pedagogy for employability. York, UK: Higher Education Academy.

Quinn, R. E., & Rohrbaugh, J. (1981). A competing values approach to organisational effectiveness. Public Productivity Review, 5(2), 122–140. https://doi.org/10.2307/3380029

Quinn, R. E., & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Science, 29(3), 363–377.

Reyes, M. R., Brackett, M. A., Rivers, S. E., White, M., & Salovey, P. (2012). Classroom emotional climate, student engagement, and academic achievement. Journal of Educational Psychology, 104(3), 700–712.

Roland, T. L. (2011). Applying the Baldrige organizational effectiveness model to the standards for accreditation of a higher education institution. International Journal of Humanities and Social Science, 17(1), 212–220.

Ronco, S. L., & Brown, S. G. (2000). Finding the ‘start line’ with an IE inventory. Paper presented at the 2000 annual meeting, Commission on Colleges of the Southern Association of Colleges and Schools, Atlanta, GA, 02–06 December.

Saunders, M. N., Lewis, P., & Thornhill, A. (2012). Research methods for business students. (6th edn.). Essex, England: Pearson Education Limited.

Schaap, P., & Kekana, E. (2016). The structural validity of the experience of Work and Life Circumstances Questionnaire (WLQ). SA Journal of Industrial Psychology, 42(1), 1–16. https://doi.org/10.4102/sajip.v42i1.1349

Schmidtlein, F. A., & Milton, T. H. (1989). College and university planning: Perspectives from a nationwide study. Planning for Higher Education, 17(3), 1–19.

Schreiber, B., Luescher-Mamashela, T., & Moja, T. (2014). Tinto in South Africa: Student integration, persistence and success, and the role of student affairs. Journal of Student Affairs in Africa, 2(2), v–x.

Schreiner, L. A., & Nelson, D. D. (2013). The contribution of student satisfaction to persistence. Journal of College Student Retention: Research, Theory and Practice, 15(1), 73–111. https://doi.org/10.2190/CS.15.1.f

Schwartzman, R. (2013). Consequences of commodifying education. Academic Exchange Quarterly, 17(3), 1096–1463.

Scott, G., Bell, S., Coates, H., & Grebennikov, L. (2010). Australian higher education leaders in times of change: The role of Pro Vice-Chancellor and Deputy Vice-Chancellor. Retrieved May 22, 2017, from http://www.tandfonline.com/doi/abs/10.1080/1360080X.2010.491113

Seale, J. (2010). Doing student voice work in higher education: An exploration of the value of participatory methods. British Educational Research Journal, 36(6), 995–1015. https://doi.org/10.1080/01411920903342038

Seale, J., Gibson, S., Haynes, J., & Potter, A. (2015). Power and resistance: Reflections on the rhetoric and reality of using participatory methods to promote student voice and engagement in higher education. Journal of further and Higher Education, 39(4), 534–552.

Shilbury, D., & Moore, K. A. (2006). A study of organizational effectiveness for national Olympic sporting organizations. Nonprofit and Voluntary Sector Quarterly, 35(1), 5–38. https://doi.org/10.1177/0899764005279512

Slade, P., & McConville, C. (2006). The validity of student evaluations of teaching. International Journal for Educational Integrity, 2(2), 43–59.

Soldner, M., Smither, C., Parsons, K., & Peek, A. (2016). Toward improved measurement of student persistence and completion. Washington, DC: American Institutes for Research.

Sojkin, B., Bartkowiak, P., & Skuza, A. (2012). Determinants of higher education choices and student satisfaction: The case of Poland. Higher Education, 63(5), 565–581. https://doi.org/10.1007/s10734-011-9459-2

Stadem, P., Ginsburg, A. D., Hafferty, F., Lachman, N., Pawlina, W., & Langley, N. (2017). Sink, swim, or grab a lifesaver: First year medical student perception of academic performance and tutoring in an anatomy course. The FASEB Journal, 31(1 Suppl), 732–716.

Subramanian, J., Anderson, V. R., Morgaine, K. C., & Thomson, W. M. (2013). The importance of ‘student voice’ in dental education. European Journal of Dental Education, 17(1), 136–141.

Sung, M., & Yang, S. (2008). Toward the model of university image: The influence of brand personality, external prestige, and reputation. Journal of Public Relations Research, 20(4), 357–376. https://doi.org/10.1080/10627260802153207

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. (5th edn.). Needham Height, MA: Allyn & Bacon.

Temple, P., Callender, C., Grove, L., & Kersh, N. (2014). Managing the student experience in a shifting higher education landscape. Heslington, York: The Higher Education Academy.

Tinto, V. (1982). Limits of theory and practice in student attrition. Journal of Higher Education, 53, 677–700. https://doi.org/10.1080/00221546.1982.11780504

Tinto, V. (1998). Colleges as communities: Taking research on student persistence seriously. Review of Higher Education, 21(2), 167–177.

Tucker, A., & Bryan, R. A. (1988). The academic dean: Dove, dragon and diplomat. American Council on Education, Washington, DC: Oryx Press Series.

Van Schalkwyk, R. D., & Steenkamp, R. J. (2014). The exploration of service quality and its measurement for private higher education institutions. Southern African Business Review, 18(2), 83–107.

Velicer, W.F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41(3), 321–327.

Vianden, J., & Barlow, P. J. (2014). Showing the love: Predictors of student loyalty to undergraduate institutions. Journal of Student Affairs Research and Practice, 51(1), 16–29. https://doi.org/10.1515/jsarp-2014-0002

Volkwein, J. F. (2007). Assessing institutional effectiveness and connecting the pieces of a fragmented university. In J. C. Burke (Ed.), Fixing the fragmented university (pp. 145–180). Bolton, MA: Anker.

Volkwein, J. F. (2011). Gaining ground: The role of institutional research in assessing student outcomes and demonstrating institutional effectiveness. NILOA Occasional Paper No. 11. Urbana, IL: National Institute for Learning Outcomes Assessment.

Von Treuer, K., & Marr, D. (2013). Tracking student success: Who is falling through the cracks? Sydney, NSW: Australian Government, Office for Learning and Teaching. Retrieved from http://hdl.handle.net/10536/DRO/DU:30060693

Welsh, J. F., & Metcalf, J. (2003). Cultivating faculty support for institutional effectiveness activities: Benchmarking best practices. Assessment and Evaluation in Higher Education, 28(1), 33–45. https://doi.org/10.1080/02602930301682

Wilkins, S., & Balakrishnan, M. S. (2013). Assessing student satisfaction in transnational higher education. International Journal of Educational Management, 27(2), 143–156. https://doi.org/10.1108/09513541311297568

Woodall, T., Hiller, A., & Resnick, S. (2014). Making sense of higher education: Students as consumers and the value of the university experience. Studies in Higher Education, 39(1), 48–67. https://doi.org/10.1080/03075079.2011.648373

Yorke, M. (2010). Employability: Aligning the message, the medium and academic values. Journal of Teaching and Learning for Graduate Employability, 1(1), 2–12. https://doi.org/10.21153/jtlge2010vol1no1art545
