THOMPSON RIVERS UNIVERSITY

Exploring Faculty Members’ Experience of Program Review

by Claire Sauvé

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Education

KAMLOOPS, BRITISH COLUMBIA, CANADA

DECEMBER, 2023

Supervisor: Dr. Gloria Ramirez
Co-Supervisor: Dr. Karen Densky
Internal Examiner: Dr. Alana Hoare
External Examiner: Dr. Catharine Dishke Hondzel

ABSTRACT

Program review is an integral aspect of quality assurance in higher education. This qualitative case study explores faculty members’ experiences leading departments through program review at a British Columbia vocational institution, focusing on the agency of the involved faculty members, meaningfulness and manageability of the process, and the impact of explicit and hidden assumptions and power dynamics. Four themes are explored: purpose and impact, project structure and process, time and workload, and power and relational dynamics. Findings reveal that program review is meaningful when framed as a collaborative and reflective exercise focused on program improvement and connected to institutional planning, with consideration for equity, access, and decolonization. Additionally, program review is both meaningful and manageable when the process is well resourced with adequate time to support fulsome engagement and when data collection and analysis methods are robust and inclusive. Twenty-one recommendations and four suggestions for implementation are provided, emphasizing central coordination, adaptability to departmental factors, consideration of equity and access, and intra- and inter-institutional collaboration. The study concludes that, with adequate resources, time, and support, program review can be a catalyst for institutional and program improvement, benefiting faculty and students.

Keywords: program review, quality assurance, higher education, vocational training, educational leadership.

ACKNOWLEDGEMENTS

Writing this thesis while myself leading an external accreditation process has been a fractal experience, turbulent and rewarding, melding my personal, professional, and scholarly activities. I would not have been able to complete this project without the support of many people in my life. Most importantly, to the participants of this study, thank you for your honest and vulnerable sharing of your experiences and your wisdom. Your willingness to share these insights is a gift that I hope will have a ripple effect towards improvements in quality assurance practices at local institutions and more broadly.

To my supervisors Drs. Karen Densky and Gloria Ramirez, thank you for your support and mentorship, your way-finding, and for seeing the potential in this project. To my committee member Dr. Alana Hoare, thank you for your expert feedback and advice. It has been a great and distinct honour to work with all three of you.

To my partner Chris, thank you for your unwavering support and the lengths you go to, to ensure that I have the time I need to continue to develop personally and professionally. To my kid Niko, thank you for your companionship and your patience with my periods of absence, distraction, and busyness as I have focused on this project and paid work. I am beyond proud of you both, and so excited to see your continued contributions to this world.
To my critical friends Todd and Erin, thank you for your companionship and conversation, and for acting on our shared commitments to support one another’s professional successes, engage in critical reflection and ongoing development, and challenge each other as we approach educational leadership through our different lenses. To Kathy and David, thank you for being my greatest cheerleaders. To my dad Reg and my sister Laura, thank you for supporting and inspiring me to use my own gifts and privileges to improve the world for those around me. The encouragement I receive from each of you sustains me and enables me to continue to do the work that I do.

To all of the faculty members and administrators supporting quality assurance in postsecondary education across Turtle Island and around the world, thank you for the collective efforts you take to ensure that students are at the centre of these processes, and that they can be both meaningful and manageable for those leading and participating in these processes.

DEDICATION

This thesis is dedicated to my mother, Terttu, who always encouraged me to follow my dreams, and to Niko, in whom I see her reflected. May your curiosity and compassion guide you to great things.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
DEDICATION
TABLE OF CONTENTS
LIST OF TABLES
LIST OF ABBREVIATIONS
LIST OF APPENDICES
Chapter 1 Introduction
Chapter 2 Literature Review
2.1 Overview of the Purpose and Implications of Program Review
2.2 Definitions of Quality in Program Review
2.2.1 Complexities and Challenges in Defining Quality
2.2.2 Conceptual Frameworks for Defining Quality
2.3 Historical Contexts of Program Review
2.4 Structural and Process-Oriented Considerations in Program Review
2.4.1 Elements of Program Review
2.4.2 Balancing Homogenization and Differentiation
2.4.3 Recommendations
2.5 Faculty Perceptions and Experiences of Program Review
2.5.1 Faculty Engagement
2.5.2 Agency
2.5.3 Collaboration and Coordination
2.6 Sociopolitical, Theoretical, and Colonial Contexts
2.6.1 Underlying Theoretical Frameworks
2.6.2 Accountability and Outcomes-based Funding Models
2.6.3 Time
2.6.4 Colonial Structures in Quality Assurance and Program Review
2.7 Identified Gap in the Literature
Chapter 3 Methodology
3.1 Positionality: Self as Researcher
3.2 Program Review at City College
3.3 Qualitative Case Study Methodology
3.4 Narrative Analysis and Case Study
3.5 Participants
3.6 Setting and Introduction of Participants
3.6.1 City College
3.6.2 Jamie
3.6.3 Kira
3.6.4 Px007
3.6.5 Seth
3.6.6 Shane
3.7 Data Collection, Preparation, and Analysis
3.8 Ethics Approval
Chapter 4 Findings: Descriptions of Faculty Members’ Experiences of Program Review
4.1 Introduction
4.2 Themes
4.2.1 Purpose and Impact
4.2.2 Project Structure and Process
4.2.3 Time and Workload
4.2.4 Power and Relational Dynamics
4.3 Conclusion
Chapter 5 Discussion: Interpretations and Recommendations
5.1 Introduction
5.2 Cross-Case Interpretation of the Findings and Issues
5.2.1 Purpose and Impact
5.2.2 Project Structure and Process
5.2.3 Time and Workload
5.2.4 Power and Relational Dynamics
5.3 Recommendations
5.3.1 Suggestions for Implementation
5.4 Limitations and Future Studies
Chapter 6 Conclusion
References
Appendices
Appendix 1 Thompson Rivers University Ethics Certificate of Approval
Appendix 2 “City College” Research Ethics Certificate of Approval

LIST OF TABLES

Table 1: Summary of Participants and Program
Table 2: Abbreviated Jefferson Transcription Symbols
Table 3: Initial Themes
Table 4: Themes and Sub-themes
Table 5: Recommendations by Issue and Response Statement

LIST OF ABBREVIATIONS

BC British Columbia
DQAB Degree Quality Assurance Board
QAPA Quality Assurance Process Audit
VP Vice President
IR Institutional Research
MS Microsoft
TRU Thompson Rivers University
CD Curriculum Development
EDI Equity, diversity, and inclusion
GBA+ Gender-based Analysis Plus

LIST OF APPENDICES

Appendix 1 Research Ethics approval from Thompson Rivers University
Appendix 2 Research Ethics approval from City College (pseudonym)

Chapter 1 Introduction

Program review[1] is an integral aspect of quality assurance in public post-secondary education in North America. I support this process in my work as an administrator with a professional focus on program evaluation and approval at a mid-sized urban college in Canada.
Guiding departments through the process, I observed that department members’ experiences varied widely as each of them progressed through, struggled with, invested hope in, and was challenged by this demanding process. Faculty and department leaders come from a variety of educational, vocational, developmental, and artistic backgrounds, and from programs of varying sizes and resources. Some programs have not gone through review in a decade or more and have limited evaluative data to begin with, and it is common for faculty to become overwhelmed as they begin the process. Upon completion of different reviews, I observed a range of outcomes. In some cases, program reviews inspired departments and resulted in clearly observable improvements to, and growth of, programs and departments. On other occasions, department members (faculty and instructors, department leaders, and program coordinators) experienced stress and disorientation in the process of program review.

Existing research on faculty members’ experiences of program review is sparse (Mussawy & Rossman, 2018; Senter et al., 2020). Much of the existing literature on program review focuses on theoretical guidelines and frameworks (McGowan, 2019), on a lack of clarity about either the purpose of quality assurance or the ways in which these processes can foster commitment to quality (Groen, 2017), or on how quality assurance processes foster either differentiation or homogenization (Skolnik, 2016). Some scholars evaluating the effectiveness of program review have highlighted the distinction between prioritization and program review and the importance of resources for both review and implementation of program improvements (Harlan, 2012), the extent to which program review actually leads to improvements in departmental outcomes and experiences (Senter et al., 2020), and theoretical frameworks and recommendations for collaboration amongst levels of leadership (Lock et al., 2018).

[1] This internal, formalized process is referred to by various names at other institutions, including program review, academic program review, program quality review, periodic program review, and other names. For the purposes of this paper, I use program review.

Responding to the lack of existing scholarship, grounded in critically reflective and dialogic practice, and influenced by critical policy theory, this study seeks to explore faculty members’ experiences leading departments through the process of program review. In some institutions and departments, this may be a faculty member who is seconded for a period of time; in others, it will be a program coordinator or department leader who balances this quality assurance activity alongside their regular workload, as is often the case at the institution where this study is located. The research focuses on the following question: what is the experience of department leaders and program coordinators leading program review?

This paper is organized into six chapters. The Literature Review chapter summarizes the existing scholarship that explores faculty experience in quality assurance processes, implied and explicit purposes, and implications of these processes. A brief historical overview of program review and definitions of quality are provided, followed by a presentation of process-related recommendations that have been raised by researchers.
Existing research on faculty members’ experiences of agency and quality assurance processes is discussed as it relates to some of the socio-political and colonial aspects of quality assurance, including globalization, neoliberalism, and chronopolitics, which Vettori (2023) defines as the “relation of temporality within a broader (political) context” (p. 3). The Methodology chapter discusses the researcher’s (my) positionality and theoretical influences, and the design of the collective instrumental case study, including a discussion of the research question and related issues. Participant recruitment, data collection tools and methods, and the narrative data analysis framework are also described. The Findings chapter introduces the setting and participants of the study and presents the stories that they relayed, with a first level of analysis. These narratives are explored with an emphasis on participants’ voices through four thematic lenses: purpose and impact, project structure and process, time and workload, and power and relational dynamics. The Discussion and Recommendations chapter presents a cross-case analysis of the emergent themes identified above, framed through the issues connected to the primary research question. The chapter includes a series of recommendations and suggestions for implementation and concludes with a discussion of the limitations of the study and suggestions for future research. Finally, the Conclusion provides a brief summary and concluding remarks.

Chapter 2 Literature Review

The purposes of program review, both explicitly stated and implied, and the ways that faculty members experience these processes are the prevailing concepts for this literature review, beginning with a broad overview of some implications of quality assurance processes. Definitions of quality within program review are explored in some depth, with a focus on existing conceptualizations of quality as they have been discussed over several decades. A brief historical context of program review is provided, focusing on elements that have remained stable over the past several decades. A summary of the structural- and process-related recommendations that have arisen from research is presented, with a focus on external versus internal processes and how institutions are dealing with increasing complexities in quality assurance. The chapter then reviews existing work on faculty members’ experiences of quality assurance processes, with a focus on agency. Finally, some of the socio-political aspects of program review are discussed, including globalization and neoliberalism, the chronopolitics of quality assurance, and the entrenchment of these mechanisms within colonial structures.

2.1 Overview of the Purpose and Implications of Program Review

Broadly speaking, the overarching purpose of program review is to ensure and improve educational quality and revitalize curriculum (Conrad & Wilson, 1985; McGowan, 2019), to maintain academic standards, and to demonstrate evidence of continuing improvement of academic programs (Creamer & Janosik, 1999; Davis et al., 2020). While academic program review is already considered a best (and even expected) practice for ensuring quality, institutions are seeking to make the process more robust “by qualifying guidelines and instructions about the process in order to use results for more strategic purposes, such as demonstrating impact” (McGowan, 2019, p. 53).
Beyond improving and assuring academic quality, program reviews are viewed as serving broader purposes such as evaluating program feasibility, viability, and priorities, assessing mission compatibility, and demonstrating accountability, reporting, and transparency (Creamer & Janosik, 1999; McGowan, 2019). Ideally, a comprehensive program review will include student learning outcomes data, focus on program improvement, and be tailored to the individual institution and program (Davis et al., 2020). At the same time, while these processes may benefit programs and focus on improvement, as Skolnik (1989, 2016) pointed out, quality assurance practices have the potential to impact institutional diversity, and in particular, to “suppress diversity, innovation, and nonconformist approaches in the search for knowledge” (1989, p. 638).

Dickeson (2010) distinguished between program review and prioritization, and stated that review processes, which typically involve self-study and are focused on program improvements, hold underlying assumptions that programs will continue regardless of resources at an institution. Advocating for reallocation based on prioritization, Dickeson contended that “resources are insufficient because they are being consumed by other programs, some of which may be of lesser value to the institution and its future” (p. 60). Considering this, it becomes apparent that while academic quality has always been at the core of program review, so has decision making around resource allocation, program discontinuance, program offerings (Conrad & Wilson, 1985; Coombs, 2022), and the need to “respond creatively to financial constraints and external expectations for accountability” (Conrad & Wilson, 1985, p. 2). As Ikenberry (2010) stated in the foreword to Dickeson (2010), “the relationship between academic quality and financial resources has always been apparent” (p. xiv).

Over the years, scholars have also considered the value of program review and the impact of program review and external accreditation processes on faculty, programs, and institutions. Some have focused on the long-term effects of the changes driven by these activities (Barak, 2007; Conrad & Wilson, 1985) while others have focused on the process itself (Creamer & Janosik, 1999; Germaine & Spencer, 2016). Creamer and Janosik (1999) posited that a strength of program review was on-going quality assurance checks incentivized at the institutional level, while a weakness was a focus on the review process itself, which can be time-consuming and expensive, over the results or implementation of recommendations. On the other hand, Germaine and Spencer (2016) contended that the true value of a quality assurance process is “less about specific assessment results, and more about the impact of the process on faculty” (p. 90).

Program review, and quality assurance processes in general, have the potential for broad-reaching implications, such as “providing an outlet for questions that tackle the very future of higher education and higher education institutions” (Vettori, 2018, p. 86). Yet, as Hoare et al. (2022) described, there is ongoing discontent with the ability of these processes to impact institutional planning. This can be exacerbated by misalignment between planning and review cycles and a disjunction of data collection for planning and review purposes.
These disconnects can be overcome by engaging stakeholders in a variety of educational quality processes (Groen, 2017), and linking program review to budgeting and planning to contribute to “fair and transparent institutional processes” (Davison et al., 2009, p. 44). While it is widely accepted that program reviews should be integrated with institutional planning, this is not always reflected in the experience of participating faculty members (Barak, 2007; Coombs, 2022).

2.2 Definitions of Quality in Program Review

This section explores the landscape of definitions of quality in the context of quality assurance in higher education. While there are complexities and challenges in creating definitions for quality, which can be nebulous and nuanced, there are several conceptual frameworks and interpretations which offer insights into the intricate nature of quality in higher education, and how these impact faculty experiences of program review.

2.2.1 Complexities and Challenges in Defining Quality

Scholars have discussed the challenges in and importance of defining quality (Groen, 2017; Harvey & Green, 1993; Schindler et al., 2015; Vettori, 2018). As Vettori (2018) explained, there is a nebulous aspect of quality, which has the “rather dubious honour of being one of the most intangible key concepts in higher education discourse” (p. 85). As Harvey and Green (1993) described, quality is often considered a relative concept. It is relative to both the “user of the term and the circumstances in which it is invoked” (p. 10), and there are many different parties in higher education, with many different perspectives and motivations which vary over time. Furthermore, the degree to which quality is considered absolute is itself relative (Harvey & Green, 1993). Similarly, Mussawy and Rossman (2018), exploring faculty perceptions of quality assurance processes in higher education in Afghanistan, asserted that while the “concept of quality as a complex discourse has received various interpretations in the context of higher education” (p. 11), no agreement on a unified definition of quality serving all purposes in higher education exists in the scholarship. Discussing challenges in defining quality, Schindler and colleagues (2015) additionally highlighted that quality “is a multidimensional concept” (p. 4). Although complex, it is important for research in quality assurance to consider the definitions and frameworks that define quality. As Harvey and Green concluded,

In the last resort quality is a philosophical concept. Definitions of quality vary and, to some extent, reflect different perspectives of the individual and society. In a democratic society there must be room for people to hold different views: there is no single correct definition of quality. (p. 28)

2.2.2 Conceptual Frameworks for Defining Quality

While there may be no one correct definition, there are numerous models and frameworks used to describe and define quality in the literature reviewed, and many definitional elements within these. Schindler et al. (2015), in their international literature review focusing on definitional aspects of quality assurance and challenges in defining quality, considered drivers of quality and identified two strategies for defining quality: stakeholder-driven and standards-driven.
Some of the stakeholder-driven definitions include those that focus on resources (Cheng & Tam, 1997; Conrad & Wilson, 1985; Schindler et al., 2015), value (Conrad & Wilson, 1985; Harvey & Green, 1993), accountability (Schindler et al., 2015; Vettori, 2018), transformation and change (Cheng & Tam, 1997; Harvey & Green, 1993; Schindler et al., 2015; Vettori, 2018), satisfaction (Cheng & Tam, 1997; Conrad & Wilson, 1985), and reputation (Cheng & Tam, 1997; Conrad & Wilson, 1985; Vettori, 2018). The standards-driven definitions include models aligned with excellence and exceptionality (Harvey & Green, 1993; Schindler et al., 2015), consistency, process, and purpose (Cheng & Tam, 1997; Harvey & Green, 1993; Vettori, 2018), and mission (Cheng & Tam, 1997; Vettori, 2018).

Considering the stakeholder-driven definitions, we first consider Cheng and Tam (1997) who, in their research project on defining educational quality in the Hong Kong educational system, developed a framework of multi-models of quality in education and outlined seven models of education quality, including the resource-input model, which they defined as institutions’ achievement of needed resources and inputs, and which they recommended in situations in which “quality resources for the institution are scarce” (p. 24). Schindler et al. (2015) described the resource efficiency model and claimed that quality can be evaluated based on how effectively and efficiently resources are used in the delivery of educational services. Conrad and Wilson’s (1985) resources lens considers the assets that are available to and used by the program.

Harvey and Green (1993), in their research exploring the nature of quality as a concept in higher education, included the value for money model, which reflects “a populist notion of quality [that] equates it with value” (p. 22). They emphasized that this model refers to the relationship between quality and cost-effectiveness. The value is assessed based on the return on investment, considering the resources used. Conrad and Wilson (1985) defined the value-added lens as focusing on how much the educational program contributes to a student’s learning while they are enrolled, including both knowledge and personal development.

The accountability-based definitions refer to responsible and optimal use of resources in delivery of educational services (Schindler et al., 2015). Vettori (2018), in their study of the Austrian higher education system and through a reconstructive-interpretative approach rooted in hermeneutics, identified five competing and yet complementary interpretive patterns of quality, including the consumer protection model, which assumes that higher education institutions are service providers with specific groups of clients or stakeholders whose interests need to be safeguarded.

Several of the conceptual frameworks are focused on transformation and change. Harvey and Green’s (1993) transformation and change model focused on enhancing participants’ experiences and empowering them to undergo fundamental, positive changes, whereas Schindler et al.’s (2015) transformational model focused on the degree to which programs effect positive change in students’ lives. Others, such as Cheng and Tam (1997), focused on organizational learning, involving continuous development and improvement through learning and adaptation. Similarly, Vettori’s (2018) quality-engineering model is “deeply infused with the ambition to create a ‘better’ organisation by re-engineering its internal processes and structures” (p. 95).
Cheng and Tam’s (1997) satisfaction model measured quality “by the extent to which the performance of an educational institution can satisfy the needs and expectations of its powerful constituencies” (p. 26). Cheng and Tam explained that this view of quality assumes that the satisfaction of constituents is key to the survival of an institution. Similarly, Conrad and Wilson’s (1985) outcomes lens considered the quality of the educational product as measured by means such as student accomplishments, faculty publications, and employer satisfaction.

The final stakeholder-driven definitional model considered here relates to reputation. Conrad and Wilson (1985) referred to a reputation lens, which emphasizes that quality cannot be directly measured and must be inferred “through the judgments of experts in the field” (p. 3). Cheng and Tam (1997) referred to the legitimacy model, through which an institution gains a legitimate position and reputation in the community, whereas Vettori (2018) referred to the entrepreneurial pattern of quality, which assumes that higher education institutions are in competition with one another in an international market for students, reputation, and funding, and must develop business strategies to gain an adequate share of the respective resources.

Considering the models that are standards-driven, Schindler et al. (2015) referred to the exceptional model, which considers quality as achievement of distinction in service and products. Harvey and Green (1993) defined the exceptional view of quality as something special, which may be distinctive, excellent, or exceeding minimum standards. Harvey and Green described the perfection, or consistency, model as a view of quality that focuses on setting expectations and meeting them consistently, and the fitness for purpose model as related to how well a program or service fulfills an intended purpose. Cheng and Tam (1997) described their process model as linked to smooth and efficient internal processes, while their absence of problems model relates to a lack of troubles and difficulties. Relatedly, Vettori (2018) referred to the managerial pattern, which equates quality with corporate measures such as effectiveness, efficiency, and productiveness, and assesses performance based on setting and meeting performance goals. Vettori also outlined the educative pattern of quality, built on the premise that universities, though autonomous, must be carefully developed and guided by an “overarching governing or regulatory body that, using a mixture of rules, regulations, incentives, sanctions and ‘learning opportunities’, such as pilot projects” (p. 93), the results of which are then checked by the same governing body. While Vettori’s pattern is somewhat procedural in description, it recalls the goal and achievement model, through which Cheng and Tam (1997) described quality as the achievement of institutional goals and standards, and Schindler et al.’s (2015) purposeful model, which measures quality through alignment to stated missions and values. A mission-driven definition of quality in program review suggests that institutions define their own success indicators.
This is the case in British Columbia (BC), the jurisdiction where this study takes place, where the Degree Quality Assurance Board (DQAB), an independent advisory committee under the purview of the Government of BC, holds the responsibility for ensuring that legislated quality assurance standards for higher education in BC are upheld. One of the mechanisms for doing so is the Quality Assurance Process Audit (QAPA) (Quality Assurance Process Audit, n.d.), an “external review process to ensure that public post-secondary institutions periodically conduct rigorous, ongoing program and institutional quality assessment” (para. 1). The QAPA handbook states that, as part of the audit, institutions are to produce a self-study that includes evidence of a formal and approved policy for periodic review of programs against a number of criteria, including the “continuing appropriateness of the program’s structure, admissions requirements, method of delivery and curriculum for the program’s educational goals and standards” (p. 9).

Conrad and Wilson (1985) asserted that as each of these lenses holds importance and none is sufficient in and of itself, quality ought to be measured through multiple indicators. While there are no universally agreed-upon definitions of quality, through these multiple interpretations, we can rely on the longstanding conceptualizations highlighted by scholars to explore the meaning and meaningfulness of quality in the perceptions of faculty going through program review in BC.

2.3 Historical Contexts of Program Review

Similar to the definitions of quality within quality assurance practices, the “origin of program review varies considerably depending on how it is defined” (Harlan, 2012, p. 740). Conrad and Wilson (1985) identified that the practice of program review is a quality assurance mechanism deeply rooted in North American post-secondary education, with a history traced from colonial and antebellum colleges up to modern universities. In a study of the first Oklahoma state-wide evaluation project of teacher education programs, Vance (1955) outlined a process that has many of the same elements as the current program review process at City College:

First, all institutions engaged in a self-evaluation of all aspects of their teacher education programs. Second, visiting committees, chosen from all levels of the teaching professions, evaluated each institution. These visiting committees submitted reports of Findings and Recommendations to the State Board after completing the evaluation of each institution. The reports showed strengths and weaknesses of teacher education programs in each institution. The reports also included recommendations for approval of each certification program and suggestions for improving areas of weaknesses. (p. 2)

In the Canadian context and under the Constitution of Canada, education and the quality assurance thereof are handled by the provinces. As Baker and Miosi (2010) discussed in their review of quality assurance activities at degree-granting public institutions across the country, the oldest forms of quality assurance in Canadian public universities are through professions (for example, physicians, lawyers, and architects) that involve graduates undergoing further assessment in the workplace and passing licensing examinations. The standards are historically not only academic but also professional.
Over the past century in Canada, quality assurance agencies have been initiated in most provinces, and although there is a wide array of policies, there are some discernible patterns and similarities. The process in Canada in the 20th and 21st centuries most commonly incorporates the three primary components outlined in Oklahoma in 1955: 1) a self-study that addresses set criteria and standards, 2) an external assessment including a site visit and report with recommendations, and 3) an action plan in response to the recommendations (Baker & Miosi, 2010; Vance, 1955). Barak (2007), looking at 30 years of state-level academic review and approval in the United States, described a situation that is similar in the Canadian context: “many of the basic elements of the policies and procedures... for both program review and program approval have changed little over the years” (p. 14). It is beyond the scope of this project to delve more deeply into the history of program review, but the stability of these features can provide some insight into the importance and meaning of the process and methods, and how these are experienced by faculty members.

2.4 Structural and Process-Oriented Considerations in Program Review

Much of the research about program review focuses on structural and process elements and offers critiques and recommendations for current models, processes, and structures. In addition to definitions of quality, as Harlan (2012) emphasized, “the criteria covered in program review will be determined to a great extent by its purpose, and it is precisely for this reason that a clear definition is imperative” (p. 743).

2.4.1 Elements of Program Review

In discussing the purpose of program review, scholars have explored the elements that should be included in program review. Some focuses of program reviews in the literature include programmatic elements such as curriculum and learning outcomes (Davison et al., 2009; Jayachandran et al., 2019); overarching educational and academic principles (Davison et al., 2009); and institutional and organizational elements such as facilities and resources (Jayachandran et al., 2019), feasibility, priorities, and organizational dependencies (McGowan, 2019). There has additionally been considerable focus on the improvement purpose of program review (Davis et al., 2020; McGowan, 2019), the inputs involved in the process (Davis et al., 2020; McGowan, 2019), and the importance of alignment with quality assurance principles (Davison et al., 2009; Jayachandran et al., 2019; McGowan, 2019).

Referring to the programmatic and educational elements of program review, Jayachandran and colleagues (2019) conducted a case study drawing upon their own experience as faculty members observing the review process at a small university in Edmonton, Alberta. They found that a program review should include “comprehensive examination of the curriculum” (p. 57), and direct and indirect evidence regarding student learning outcomes. Similarly, Davison et al. (2009), in their California state-sponsored research, set out a standard for program review in the community college system and suggested that a well-developed review process should be directed towards teaching and learning and should be derived from well-considered academic values.

Considering the institutional and organizational elements of program review, McGowan (2019) conducted a study examining program review processes at 53 small-to-large public institutions in the United States using a content analysis methodology.
They found that the increasing complexity of evaluation processes, guidelines, and instruments and the ubiquity of the self-study structure reinforced by independent external review demonstrated that “institutions are attempting to tie the academic review process more strongly to data collection and strategic decision making over previous continuance proposal structures” (p. 61). McGowan highlighted that an important purpose of program review is to examine feasibility, viability, and priorities; evaluate effectiveness or performance; and consider organizational dependency. Jayachandran et al. (2019) further recommended that program review include a comprehensive and thorough analysis of facilities and supports for students, technical and otherwise.

Studies focusing on the purpose and process of program review itself emphasize a focus on program improvement. In their qualitative study exploring faculty members’ experiences of program review at one research-intensive public university in the United States, Davis and colleagues (2020) found that the process should be improvement-focused. Similarly, McGowan (2019) recommended that program reviews be focused on improving program quality and emphasized that the achievements that are evaluated and the strengths and weaknesses that are revealed steer the program towards a directional shift and produce a foundation for action.

Finally, scholars have argued that program review should follow quality-assurance management principles as an “integral part of the teaching and learning process throughout the program” (Jayachandran et al., 2019, p. 59), should consider accountability, reporting, transparency, or data collection purposes (McGowan, 2019), should be both descriptive and evaluative (Davison et al., 2009), and should include robust data collection, incorporating multiple and diverse viewpoints (Davis et al., 2020; McGowan, 2019). In addition to the elements and components of review, some scholars discuss risks associated with program review, explored in the following section.

2.4.2 Balancing Homogenization and Differentiation

Two factors discussed here that have an impact on faculty members and their experiences are homogenization and differentiation through program review, and whether quality assurance processes are internally or externally driven. Skolnik (1989), in their review of Ontario’s provincial system of program appraisal in the 1980s, highlighted the risk of homogenization within a system-wide implementation of program review in which “a single group of connoisseurs make quality judgments for all programs” (p. 639). More recently, in a study examining documents from quality assurance agencies in multiple jurisdictions (Alberta, Australia, Austria, British Columbia, Denmark, Finland, Flanders, Florida, Germany, Ireland, the Netherlands, New Zealand, and Ontario), Skolnik (2016) explored how quality assurance systems accommodate differences between academic and applied post-secondary institutions, supporting either homogenization or differentiation amongst institutions. Skolnik highlighted some features that recognize characteristic differences, such as statements of learning outcomes and qualifications for faculty, and found that “to a great extent, whether quality assurance systems foster differentiation or homogenization will depend largely upon actual day-to-day operations of these systems” (p. 375).
For example, many review systems utilize (and thus review) statements of learning outcomes that are appropriate for traditional university systems in applied settings, contributing to homogenization and “reduction in diversity by assessing the quality of different kinds of institutions or programs by the same yardsticks” (p. 374).

The differences in impact and implication of internally versus externally driven processes have been considered by several scholars. In their large-scale review of quality assurance processes in the United States (all 50 states) and eight countries (Canada, England, Hong Kong, the Netherlands, New Zealand, and Scotland), Creamer and Janosik (1999) found that while external accreditation processes may provide better stimulus and motivation for change, internal program approval and review mechanisms “can best safeguard the institution’s autonomy, integrate the processes with the institutional self-improvement efforts, be more flexible, and boost the morale of the faculty and administrators of institutions” (p. 10). McGowan (2019) emphasized that “higher education’s adoption of continuous quality improvement practices may have had the unintended effect of isolating faculty from processes, despite an accrediting body’s efforts to expect or require their participation” (pp. 55-56). In addition to the elements and components of program review and the internal or external organization of review, some scholars have put forth recommendations, which are discussed in the following section.

2.4.3 Recommendations

Some of the research reviewed provided recommendations for program review and other quality assurance processes. Common recommendations included setting clear and realistic expectations (Lock et al., 2018; Senter et al., 2020), engaging with multiple stakeholders (Lock et al., 2018; McGowan, 2019), using the internal and external collaborative resources available to reduce the time and resources required (Senter et al., 2020), and ensuring that personnel leading the process have sufficient resources to conduct a thorough review (Davison et al., 2009; McGowan, 2019; Senter et al., 2020).

An overarching theme across these recommendations was collaboration between departments and administration (Harlan, 2012; Lock et al., 2018; Senter et al., 2020). Harlan (2012), implementing a concept first introduced by Barak and Brier (1990), conducted a case study of a meta-review (a review of the review process) at a small, private, liberal arts college in the United States, with the intent to “review, realign, and reenergize the program review system at the critical transfer from the first [review] cycle to the second” (p. 743). They found that while program review had an overall beneficial impact, there were barriers to success in the post-review stage of the process. Notably, they found that “momentum often dissipates after the site visit, and by the time the external report arrives, it receives little attention” (p. 750). They concluded that collaboration between administration and academic departments is required to enable administrators to allocate adequate funding towards both program review and resulting improvements, and faculty to focus on areas, such as curriculum, that they most closely control.

Setting clear expectations was highlighted as important. In their national survey-based study of Sociology Department Chairs’ perspectives on program reviews and their outcomes for departments and students at their respective institutions, Senter et al.
(2020) recommended that departments be realistic in their expectations regarding what program review can and cannot accomplish. As they explained,

if faculty have reason to believe that administrators at their institution will provide little assistance to departments – financial or otherwise – and will not use the extensive qualitative data and discussions that accompany program review to make decisions... then faculty should reduce the time spent on preparing program review documents, to the extent practical. (p. 13)

Similarly, Lock, Hill, and Dyjur (2018), in a reflective case study, explored their own involvement in curriculum review at three levels of leadership (associate dean, course coordinator, and curriculum development specialist) and found that successful review requires commitment from personnel at various levels, and that, in order to maintain engagement, “it is important to talk about expectations, time commitment, and responsibility so that people can establish manageable and acceptable workloads” (p. 127).

Both Lock et al. (2018) and McGowan (2019) drew connections between engaging multiple stakeholders at multiple levels and implementing improvements. In particular, McGowan emphasized planning processes that engage with multiple stakeholders in order to ensure that achievements, strengths, and areas of improvement are objectively evaluated and reveal potential shifts in direction. Lock et al., similarly, recommended that reviews result in implementation of findings through actionable items. They concluded that “a critical component to the success of a curriculum review that nurtures a collaborative and collegial culture while ensuring high-quality, meaningful learning experiences for students is the development and sustainability of multiple levels of leadership” (p. 130).

One of the most common recommendations related to ensuring that departments and department leaders have adequate time and resources allocated to the review. For instance, of the recommendations Senter et al. (2020) brought forward, several pertained to time and resources, including reducing time spent on the routine aspects of the process by engaging with student research assistants and using the collective processes available on their campuses and through their network to engage in administrative best practices. Similarly, as Davison et al. (2009) explained, it is essential that program reviews are designed to follow a timeline and be “provided with the resources to meet its goals and purposes” (p. 21). McGowan summarized the recommendations presented here and the emphasis on time and resources, stating:

For stakeholders wishing to improve their processes, ensuring that their data collection techniques are robust, planning processes address multiple stakeholders, and personnel has sufficient resources to conduct a thorough review, which will help ensure that those achievements are evaluated objectively and that strengths and weaknesses reveal potential for direction shift. (p. 61)

Much of the existing scholarship on program review in education focuses on process and structure, and makes recommendations for departments, faculties, and institutions. The recommendations highlighted in the existing scholarship can provide lenses through which to consider faculty members’ experiences of program review today.
2.5 Faculty Perceptions and Experiences of Program Review

There is a small amount of research regarding faculty members’ experiences of program review, and that which exists shows that faculty support of the process is “mixed at best” (Senter et al., 2020, p. 5). Program reviews have the potential for negative effects, such as increased anxiety, time away from teaching and research, unfulfilled expectations (Conrad & Wilson, 1985), and isolating faculty from the review process, despite efforts to engage them (McGowan, 2019). As Hoare et al. (2022) pointed out in their paper theorizing about and introducing a professional learning community-based approach to program review, the process can be complex. At most institutions there are internal experts and resources available to support faculty members through the process, although faculty may lack awareness of, and have difficulty accessing, these resources.

2.5.1 Faculty Engagement

The importance of faculty involvement has been long established. As Vance (1955) described, two concepts are particularly important in internally-driven quality assurance processes: “first, institutional purpose, responsibility, and opportunity for service may provide guides for judging the program and evaluating its effectiveness. Second, broad participation in the self-evaluation process is essential” (p. 28). More recently, Lucander and Christersson (2020), in the midst of a national external quality evaluation of degrees in the Swedish higher education system, conducted a study reporting on the process of engaging instructional staff and students in the development and pilot test of a quality improvement system. They found that motivating instructors in quality assurance needed to be addressed specifically, and that including instructors in the development of a process for quality assurance of assessment could ensure that the process was relevant for teaching and learning. Davison et al. (2009) summarized the importance of engagement in their paper describing a standard for California community colleges, stating that program review should be

a faculty-led process, motivated by professionalism and the desire to make community college programs relevant, effective, and exemplary... A deliberative and well-planned process that is faculty driven and respected throughout the college can and will result in meaningful evaluation from which vital information can be derived for the maintenance and improvement of the integrity of the college community and its educational programs. (p. 44)

While the importance of engagement is acknowledged, there exists a persistent dissatisfaction with the ability of the process to meaningfully engage faculty members (Cardoso et al., 2018; Groen, 2017; Hoare et al., 2022). Faculty members may perceive that quality assurance is a “bureaucratic process that impedes their activities” (Vettori, 2018, p. 85), “another hoop to jump through” (De Valenzuela et al., 2005, p. 2244), and that the creation of extensive program review reports, rather than directly enhancing the student experience, constitutes a misuse of valuable resources and evidences a “lack of administrative resource stewardship” (Senter et al., 2020).

One of the factors researchers raised as impacting faculty engagement in quality assurance is their perception of having a voice in, or ownership over, the process. In their case study exploring faculty perceptions of a program evaluation cycle at one large research university in the United States, De Valenzuela et al.
(2005) found that “as a result of their perceptions of limited participation and voice, the participants felt rather disenchanted with and cynical about the process of program evaluation” (p. 2240). In their mixed-method and large-scale survey study of approximately 1400 academics at 16 public and private postsecondary institutions in Portugal, Cardoso et al. (2018) found that not only do academics demonstrate an unwillingness to engage in quality assurance processes, they actively withdraw from participation, particularly in externally-driven processes. They recommended that “translating” (p. 79) processes into departments’ local contexts can strengthen quality culture and make participation in quality assurance activities more appealing and effective.

2.5.2 Agency

Several scholars have discussed the role of faculty members’ sense of agency in program review and other quality assurance processes. One of the underlying causes of dissatisfaction with program review may be related to a lack of agency in determining methods and criteria for assessing quality, which can result in experiencing the process as authoritarian or as an external imposition (Cardoso et al., 2018; McGowan, 2019; Skolnik, 1989). Faculty members participating in the process may perceive a lack of transparency in how self-studies are created, and in the data collection methods that inform their content. Davis et al. (2020) highlighted one internal reviewer’s comment, that it is “too easy for the department leader to paint a very different picture than as experienced by everyone else in the department” (p. 11). Hail and colleagues (2019) conducted a mixed-method study surveying and interviewing faculty, lecturers, and administrators in the midst of a process change in data collection and preparation for an external accreditation. They found that while the process was generally accepted and appreciated, concerns were raised amongst faculty, including agency within the process. As they stated, “not only did they feel they had little voice in the decision to pursue national accreditation, but their participation was grossly unvalued” (p. 19).

On the other hand, when it is emphasized that program review is a group process, focused neither on individual faculty members nor on their preferences, the process can encourage departments to view program review as an opportunity for reflection and improvement (Kleniewski, 2003; Senter et al., 2020; Wagenaar, 2015). As Wagenaar wrote in their personal reflection on 38 years of program reviews, “faculty should take charge of the process both within their departments as well as on campus” (p. 13). Looking at structural elements that impact faculty involvement, Barak (2007), in a longitudinal 30-year review of academic review and approvals by state coordinating and governing boards, found that a shift in responsibility from the state level to the institutional level was rationalized in part in order to enhance faculty ownership over the process, with an internal locus of review. Considering program review per se, Kleniewski (2003), in an article reflecting on their experiences with program review, described how a well-organized review system, one that is systematic, includes a self-study and external visitors, and provides a link to resource allocation, can engage faculty in planning their own future, to the benefit of everyone in the institution.

Some researchers have suggested collaborative and participatory models and practices to support faculty engagement.
As Groen (2017) pointed out in their paper exploring participatory approaches from the field of evaluation and their applicability to program review, increasing engagement requires a concerted effort to situate quality assurance within academic programs and to enable a supported participatory approach, which "will greatly contribute to more relevant assurance processes, and by consequence, quality higher education" (p. 96). Building structure around a participatory approach, Hoare et al. (2022) suggested a program review Learning Community cohort-based model to address challenges experienced in the process by providing time to meet, access to expertise and resources, and other structural elements that reduce isolation, to "contribute to engaged program review participants, reflection of more voices, and action planning for meaningful change and continuous quality improvement" (p. 411). Davis and colleagues (2020) recommended involving faculty from other departments as Internal Peer Reviewers in the process, whereby colleagues contribute a deep understanding of the review process and "hold institutional knowledge that can help inform the feasibility of certain recommendations" (p. 12). Similarly, considering program review in a Social Work department, Senter et al. (2020) suggested structuring program reviews in a way that involves the entire department, rather than only individual faculty members.

There are several factors presented in the literature that support faculty members' agency in program review, along with some concrete suggestions brought forward by researchers. Faculty engagement can be supported by ensuring that faculty members have a sense of ownership and influence in the process, including in determining methods and criteria for assessing quality and in creating self-study reports. Several scholars emphasized the benefits to faculty engagement of approaching review as a group process, and some suggested particular models for a team approach, such as internal peer reviewers and structuring reviews as a learning community practice. These models suggest that collaboration has a positive impact on faculty members' perceptions of program review, as explored in the following section.

2.5.3 Collaboration and Coordination

Collaboration amongst faculty members and across lines of leadership has also been suggested by researchers exploring program review. Hail et al. (2019) found that faculty question the degree to which accreditation supports collaboration, and stated that ideally, the process "assists faculty in collaborating with their colleagues in making... systemic changes resulting in stronger outcomes" (p. 26). In a seven-year longitudinal study of faculty perceptions of an external accreditation process at a large non-profit university with nine campuses in California, Germaine and Spencer (2016) noted that in order to reduce faculty resistance, commitment is required from administration to accommodate the time required to undertake the process. As they stated:

Efforts by administrators to include a time allowance commensurate with added tasks of accreditation will show commitment by administrators and address the concern on the part of faculty that these changes will "come and go," thus addressing another element of faculty resistance. (p. 91)

Involvement from senior administrators and decision makers can greatly impact faculty perceptions of program review and contribute to a culture of quality.
Examining the implementation of quality assurance and accreditation processes in Afghanistan, Mussawy and Rossman (2018) pointed out that successful implementation requires both the establishment of a culture of quality in which academic units "own the processes and outcomes" (p. 9), and the engagement of key stakeholders from staff, faculty, and administrative groups, with internalized quality assurance processes. Hoare and colleagues (2022) described that a structure of leadership and coordination for review processes involving both quality assurance practitioners and educational developers may best support faculty through these processes, "particularly when they work in partnership to provide wraparound institutional supports during the review process" (p. 408).

In summary, the existing research on faculty experience with program review is sparse, and most indicates ambivalence at best, ranging to cynicism and disengagement. One factor that has been explored is faculty agency in the process; ensuring that faculty have ownership in the process can encourage the process to be reflective and improvement-based. Specific suggestions brought forward for engaging faculty include forming Professional Learning Communities, engaging Internal Peer Reviewers, and supporting departmental ownership of aspects of the review that are within their locus of control, especially curriculum, teaching, and learning. Ensuring that institutional decision makers acknowledge the workload involved and properly resource the process is also important and can encourage a culture of quality amongst faculty and all organizational units.

2.6 Sociopolitical, Theoretical, and Colonial Contexts

Program reviews take place within a larger socio-political context, influenced at procedural and structural levels by institutional cultures, state and national political priorities, globalization, neo-liberal and colonial factors, as well as explicit and implicit theoretical frameworks.

2.6.1 Underlying Theoretical Frameworks

In aligning institutional and external accountability frameworks, as Hoare et al. (2022) described, "we must explore how quality is represented in the political discourse, problematize the assumptions defining program quality, and question how application of the standards impacts our communities" (p. 410). Leaving assumptions about these processes unexamined can result in quality assurance processes acting to reinforce tendencies towards conformism (Skolnik, 1989), or becoming a "tool for safeguarding and enforcing (political) interests" (Vettori, 2018). These processes also rest on theoretical frameworks, whether or not this is made explicit. Speaking about teachers (but relevant also to those driving quality assurance processes), Giroux (2006) described that although financial and time constraints can stand in the way of our exploring and understanding the relevance of the theory behind our practices, engaging in an educational practice without reflecting on it only denies the fact that the practice is already informed by theoretical supposition. Borrowing from a critical policy approach, we must also reject the assumption that the policies that guide program review can be "neutral, entirely uncommitted to and removed from interests and values" (Fischer et al., 2015, p. 1), and identify quality assurance commitments against criteria such as social justice, democracy, and empowerment.
As Vettori (2018) warned, even critics of current practices can be "latently oriented at the same kind of logics they are opposing on the manifest level" (p. 98), underlining the considerable influence of such logics on the framing and uses of quality assurance processes such as program reviews.

2.6.2 Accountability and Outcomes-based Funding Models

The political pressures that institutions face also play out within the arena of program review and quality assurance practices in general. Program review can sit at the centre of the tensions between financial pressures and quality education, placing pressures on faculty beyond improving program quality. Additionally, framing program review as a control mechanism can place participants in these processes in the uncomfortable position of enforcing audit culture, emphasizing constant assessment and quantitatively measurable outcomes over the more intangible and complex facets of program quality. As presented below, this can be countered by ensuring that educational practitioners and faculty members focus on the aspects over which they have control, and that concerns from departments are considered and addressed by decision makers.

Post-secondary institutions have long balanced financial constraints with responding to demands for quality and accountability (Conrad & Wilson, 1985; Creamer, 2001; Jayachandran et al., 2019; Raffoul et al., 2023). Considering faculty members' experiences with program review, and their engagement and agency in these processes, we might consider what underlying factors influence review processes. Higher education, as Raffoul and colleagues explained, "has faced increased pressure to prove its quality through 'economic efficiency' and 'value for money', thrusting institutions into what researchers call an 'audit culture'" (p. 258). In some jurisdictions, there have been moves to hold institutions financially responsible for the outcomes of their students. An example of this in the United States is the concept of skin in the game, exemplified by the Skin in the Game Act (S.2124 - 116th Congress (2019-2020), 2019), introduced to the United States Congress in 2019, which would require post-secondary institutions to pay 50% of student loans that default. The premise behind the Skin in the Game Act is that post-secondary institutions do not have enough stake in the employment success of their students; however, in practice, (mostly for-profit) higher education institutions respond to these skin-in-the-game incentives by creating risk-sharing or income-sharing agreements, as Collier (2019) described in their report on the cost of college in the United States. This can shift the onus onto the student regardless of their success in their programs and can result in crippling amounts of debt. This is particularly true with for-profit institutions and "boot camps", as illustrated in Asher-Schapiro's (2020) exposé in Harper's Magazine, "Skin in the Game". In Ontario, Performance Based Funding, introduced in 2020 (Promoting Excellence, 2020), measures institutions' performance across 10 metrics relating to Skills and Job Outcomes and Economic and Community Impact. In their critical policy analysis, Lawrence and Rezai-Rashti (2022) stated that these reforms represented a fundamental shift "at the expense of a more egalitarian system of social equity and critical citizenship" (p. 149). This accountability-based view of program quality impacts those who guide program review. In the broader Canadian context, Raffoul et al.
(2023) conducted a qualitative study exploring the impact of audit culture and the perception of data collection for accountability measures, and found that while educational developers working on review processes resisted "perceiving themselves as agents of the audit culture", they "are impacted by its hold as much as they are tasked with carrying out its mission" (p. 266). In British Columbia, where this study is set, Siedlaczek (2022) conducted a policy analysis and qualitative inquiry study, analyzing key BC reports and interviewing 12 policy makers and institutional leaders to explore the creation of the Quality Assurance Process Audit (QAPA) policy. Siedlaczek found that the QAPA policy was well-received by senior leaders in the province as a constructive addition to quality assurance, "providing clarity in expectations while recognizing institutional diversity" (p. ix), while balancing "the competing notions of accountability and autonomy, standardization and flexibility, and quality assurance and quality enhancement" (p. 292). Considering how this impacts the experiences of faculty members participating in program reviews, Raffoul et al. recommended that educational developers maintain a focus on improving teaching and learning, foster collaboration within the institution (including with students) and with external agencies, and engage in collective action, "networking with and pooling efforts across institutions" (p. 266), while Siedlaczek observed that educational leaders needed to "address concerns from their communities about maintaining a sense of control and agency over the future of their programs" (p. 232).

Researchers have explored how faculty members' experiences of program review are impacted by the focus of program review and quality assurance practices. In many jurisdictions, quality assurance measures, accountability, and financial performance are conflated, requiring participants in quality assurance processes to navigate tensions amongst these factors. As Senter et al. (2020) described:

Countering those who call most vociferously for the institutionalization of accountability systems within higher education are the critics who lament the neoliberal takeover of the university that privileges market-driven imperatives for accountability and "bean counting" ... Critics see program review as an exercise that satisfies regulators, who want a mechanism in place for overseeing programs, without providing a fulsome regard for issues of quality and program purpose. (p. 4)

In BC, the governmental quality audit process is positioned as balancing quality and accountability factors, and care needs to be taken by administrators to protect the agency of faculty members in contributing to the future of their programs.

2.6.3 Time

Considering chronopolitical aspects of program review, Vettori (2023) postulated that the temporalities of academia are not separate from the indicators that govern academic life and define success in higher education. As delivery becomes increasingly dominant in postsecondary institutions, the "time at hand for discovery" (p. 2) becomes diminished, impacting the intrinsic rhythms of practices, both organizationally and individually. Vettori emphasized that internal and external quality assurance mechanisms not only bind time, but regulate and govern it, "imposing temporal norms regarding tempo, rhythm, time-spans, time-scales and time ownership on higher education institutions and the people working and learning there" (p. 10).
Many of the scholarly discussions on quality assurance practices identify pressures of time and workload as being among the challenges that faculty members experience (Conrad & Wilson, 1985; Creamer & Janosik, 1999; Davison et al., 2009; Germaine & Spencer, 2016; McGowan, 2019). As Vettori (2023) pointed out, "there is a clear case to be made for more reflexivity regarding the temporalities of quality assurance in higher education and a more conscious treatment of time in all its dimensions" (p. 11).

Program reviews sit within and are influenced by their socio-political and theoretical contexts and framings. The neo-liberal influences on quality assurance mechanisms manifest in several ways, including a consistent focus on accountability and achievement, and an emphasis on delivery over time for discovery. Looking at program review through a decolonial lens, we see that evaluation metrics often prioritize fiscal accountability over Indigenous public health metrics in health care settings (Anderson & Smylie, 2009). Some groups, such as the communities described in LaFrance and Nichols' (2008) summary, have conceptualized Indigenous Evaluation Frameworks; however, decolonial critiques of quality assurance measures require a complete questioning of the ontological frameworks within which our work in higher education exists, as discussed in the following section.

2.6.4 Colonial Structures in Quality Assurance and Program Review

Quality assurance practices such as program review are often entrenched within structures, metrics, and definitions of quality that are reflections of colonial systems, emphasizing accountability over program quality improvements or community requirements, individual achievement over collective improvement, and efficiency over reflection and deliberation (Anderson & Smylie, 2009; Hoare et al., 2022; LaFrance & Nichols, 2008). In their systematic literature review inventorying health performance metrics for First Nations, Inuit, and Métis people in Canada, Anderson and Smylie (2009) highlighted that evaluation indicators are selected for the purpose of "fiduciary accountability requirements as opposed to informing public health policy or planning" (p. 5). In their work summarizing discussions of focus groups with Indigenous scientists, educators, evaluators, and cultural experts in major tribal regions around the United States, LaFrance and Nichols (2008) described an Indigenous Evaluation Framework premised on the following core values: Indigenous knowledge creation context is critical; People of a place; Recognizing our gifts – personal sovereignty; Centrality of community and family; and Tribal sovereignty. LaFrance and Nichols stressed that "as evaluators we must continually remind ourselves of our responsibility to be comprehensive in our observations, to value subjective experience as well as objective data, and to ensure that we are contributing to the health and well-being of the world" (p. 27). As Shahjahan and colleagues (2017) explored in their critique of Global University Rankings, decolonial interventions in academia are complex, and require an assertion that the structures sustaining academic knowledge production "were created to uphold a political project of universality that is inherently committed to the elimination of alternatives as a necessary condition to justify its (universal) legitimacy" (p. 11).
While this study is not focused on rankings, their critique may be extrapolated to broader quality assurance processes as being "symptomatic of a much broader crisis shaking the ontological securities of modern institutions and that it is only through the loss of our satisfaction with these securities that we can start to imagine otherwise" (p. 2). In other words, and relating to the context of this study, the established ways of operating and taken-for-granted beliefs within institutions that provide a sense of legitimacy and continuity may also be upholding colonial norms, structures, and assumptions within quality assurance practices. With agency in the process, faculty members could have the potential to critically examine and challenge these established norms and practices related to program review, participate in meaningful reflection, and advocate for changes that promote inclusivity, equity, and the improvement of educational programs.

Program review does not need to enforce colonial structures and reinforce racist policies. In their strategy brief on review processes at community colleges in Illinois, Rockey and colleagues (2021) illustrated opportunities within program review to work towards closing racial equity gaps on personal (individual), interpersonal, institutional, and structural levels, and suggested that utilizing program review as an opportunity to implement anti-racist change "can serve to overcome the shortcomings ... of institutions claiming to be committed to addressing racial equity gaps but failing to demonstrate actionable steps and outcomes toward this goal" (p. 3). Considering the levels that Rockey et al. highlighted and relating them back to the context at City College, this could be implemented within program review by providing training for faculty members to recognize and challenge barriers to access and implicit bias; engaging racialized (and other equity-seeking) students, community members, and faculty within program review; reviewing templates and policy to include reflection around equity, diversity, and inclusion; or initiating partnerships with host First Nations and other race-conscious community partners to close opportunity gaps in accessing training and education.

In their paper building on a keynote presentation at the 2018 Canadian Evaluation Society Conference, Wehipeihana (2019) provided a definition of Indigenous evaluation, arguing that Indigenous evaluation needs to be led by Indigenous peoples, and suggested strategies to support non-Indigenous evaluators to "assess their practice and explore how power is shared or not shared in evaluation with Indigenous peoples" (p. 368). Wehipeihana argued that for non-Indigenous evaluators, a paradigm shift is necessary to "disrupt their taken-for-granted assumptions of control and to radically shift the power balance by placing control in the hands of Indigenous peoples" (p. 380), which requires introspection, humility, courage, and interrogation of implicit biases.

2.7 Identified Gap in the Literature

As evidenced in the above review of the literature, the scholarship on the experience of faculty members participating in and leading quality assurance processes is sparse. This study aims to fill a gap and add to the literature by providing an exploration, in the Canadian context, of an internal program review process, focusing on department leaders and program coordinators at a vocational institution, across a spectrum of diploma- and certificate-level programs.
Chapter 3 Methodology

The purpose of this study is to explore the phenomenon of program review at "City College", a mid-sized vocational institution in British Columbia, Canada, and the experiences of department leaders and program coordinators who are leading these processes. This chapter outlines the methodology of the project, including my positionality as researcher, and provides a description and justification of the methodological decisions and the data collection and analysis tools and methods.

3.1 Positionality: Self as Researcher

I am a white first-generation Canadian on my mother's side, many generations on my father's, living and working on the traditional and unceded territories of the Musqueam, Squamish, and Tsleil-Waututh Coast Salish people, where I hold a leadership position at a local post-secondary institution. I grew up with the privilege of upper-middle class parents with professional vocations who valued public education and public healthcare and provided my sister and me with every opportunity to pursue our interests. My thinking and scholarship, influenced by my upbringing and opportunities to explore many and diverse interests and experiences, are underpinned by an interpretive, constructivist epistemology, and are grounded in critical reflection, which I illustrate through a brief anecdote from my first academic pursuit, mathematics.

When I first started university, in the late 1990s, I studied pure mathematics at the undergraduate and graduate levels, and then transitioned into working as a math tutor for several years. It is easy to assume that mathematics is objective and quantitative by nature and situated within modernism and positivism. In the context of mathematical reasoning, Fourez (1997, as cited in Job & Schneider, 2014) described that empirical positivism asserts that "we can discover scientific laws independently from any context or project… models, notions and scientific laws exist by themselves... and are in no way models devised by humans to understand the world that surrounds them" (p. 6). Modernism in this context is defined by Gray (2022) as

an autonomous body of ideas, having little or no outward reference, placing considerable emphasis on formal aspects of the work and maintaining a complicated – indeed, anxious – rather than a naïve relationship with the day-to-day world. (p. 1)

Gray, Job, and Schneider question this philosophical positioning of mathematics, and similarly, that was not my experience of being a mathematician. Studying various fields of math – algebra, topology, set theory – we started with very few assumptions (axioms, or first principles) and built up the fields, proving increasingly difficult and abstract theorems. Anything finite was dismissed as the trivial case, and only infinite cases were worth considering. One particularly influential example of this is the continuum hypothesis, which states that the smallest infinite magnitude larger than the countable is the continuum. Famously, Georg Cantor spent his entire career vacillating between trying to prove the hypothesis true, and then false, and back again, always finding some fatal flaw in whichever he was aiming to prove. This puzzle haunted Cantor through repeated mental breakdowns, and a young mathematician, Kurt Gödel, continuing and diverging from what Cantor had discovered, proved instead the Incompleteness Theorem.
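In modern set-theoretic notation, the continuum hypothesis can be written concisely (this is the standard formulation, added here for illustration rather than drawn from the sources cited in this chapter):

2^{\aleph_0} = \aleph_1

That is, the cardinality of the real numbers, the continuum 2^{\aleph_0}, is the very next infinite cardinal after the countable cardinal \aleph_0. Gödel showed that this statement cannot be disproved from the standard axioms of set theory, and Paul Cohen later showed that it cannot be proved from them either, making the continuum hypothesis a concrete instance of the undecidability that Aczel describes below.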
Aczel (2011) described the principle in his book The Mystery of the Aleph:

No matter how careful mathematicians may be in designing a logical system of first principles on which to construct arithmetic, algebra, analysis and the rest of mathematics... the system will always contain issues that are undecidable, regardless of whether or not they are true. (p. 197)

This belief that came to me from mathematics is pervasive in my view of knowledge – no matter what we think we may know and how certain we are, we will always be either bound within our structures and face contradictions, or we will be self-referential. We simply cannot be separate from our viewpoint unless we acknowledge it, which inevitably involves questioning our underlying assumptions. As a math tutor, I experienced this with every student. I found that as I engaged with students who were grappling with various mathematical concepts and problems, not only my approach but also my understanding of the concepts in this field in which I considered myself an expert was shaped by every interaction with every student. The social and cultural constructs with which they arrived were intertwined with their understanding in such an intricate manner that I had to adjust my understanding in order to see the subject matter through their lenses.

Another early influence on my thinking and practice, Brookfield (1998), wrote about critical reflection through four lenses: the autobiographical, the learners' eyes, colleagues' perceptions, and theoretical, philosophical and research literature. As Brookfield outlined:

Reviewing practice through these lenses makes us more aware of those submerged and unacknowledged power dynamics that infuse all practice settings. It also helps us detect hegemonic assumptions—assumptions that we think are in our own best interests but that actually work against us in the long term. (p. 197)

Qualitative research is interpretive, and as such, researchers should be self-reflective about their role in the research, how findings are interpreted, and how personal and political histories influence interpretation (Creswell & Creswell, 2018). In my current role in higher education, I work with program coordinators in a dozen program areas, supporting and guiding them in developing, implementing, and evaluating courses and programs. Every project that I am involved with exists within a set of assumptions shaped by the cultural, societal, and other factors of both the institution at which we work (which the program areas have in common) and of the program area itself (which are variably unique). As an evaluator and as a researcher, I know that it is not only useful but necessary to acknowledge that I am not able to separate myself from my own assumptions, just as the program coordinators, students, instructors, employers, and other stakeholders are not able to separate themselves from theirs. Further, as an educational leader and a researcher focusing on quality assurance processes, in which I am involved on both a scholarly and professional level, I arrived at my research questions and my gaze through these lenses, with preconceptions and existing biases. Through this research, I strove to keep these biases in check to prevent interference with an accurate presentation of participants' perspectives. However, my knowledge and experience leading program review processes in higher education can also be viewed within qualitative research paradigms as an asset that increases the dependability and credibility of the study.
Due to the interpretive and intrinsically subjective nature of qualitative research, I maintain my self-reflexivity through my history as a privileged settler, mathematician, tutor, educational leader, and now novice researcher, and describe my research methodology in the sections that follow.

3.2 Program Review at City College

City College is located in British Columbia, Canada, and is a mid-sized public vocational and access-focused institution that includes academic, professional, technical, and artistic programming as well as continuing studies and access programming such as deaf and hard of hearing studies, visually impaired studies, and adult basic education. The college offers approximately 50 programs, from bachelor degrees to micro-credentials, with over 90% of programming at the certificate and diploma level.

As described in Chapter 2, the responsibility for ensuring that legislated quality assurance standards for higher education are upheld sits with an independent advisory board, the Degree Quality Assessment Board (DQAB), which operates under the Ministry of Post-Secondary Education and Future Skills. In 2016, DQAB launched the Quality Assurance Process Audit (QAPA) as a measure to ensure that public post-secondary institutions were conducting regular and ongoing quality assessments. City College underwent its QAPA within the first three years of the process's inception. City College, like all colleges and institutes in British Columbia, is governed under the College and Institute Act (1996), which stipulates that a college or institute's education council must advise the board of governors (and the board of governors must seek advice from the education council) on the development of educational policy for all educational matters, including (but not limited to) the mission statement and the educational goals, objectives, strategies, and priorities of the institution; evaluation of programs and services; proposals about implementation of courses or programs leading to credentials; and new non-credit programs (College and Institute Act, 1996).

The quality assurance activities at City College are overseen by the education council and the standing education quality committee, guided by two parallel sets of policies and procedures, one describing program review and the other describing a similar student services review process. The program review policy describes an annual review process and a periodic, fulsome review process: program review, as it is explored in this study. Program review at City College is a formative process that is forward-looking, collaborative, transparent, inclusive, and consultative, engaging with faculty and instructors, staff and administrators, current and past students, industry and employers, and community representatives.

Policy lays out that City College typically conducts between two and five program reviews per year, and that programs are reviewed on a five- to seven-year cycle. The process can be initiated by college administration or by the department, and a five-year schedule is maintained by the Vice President (VP) Academic's office. Programs that are externally accredited do not need to participate in the program review process. Policy states that the program review usually takes one year to complete, but that the length will vary based on capacity and size of the program. Program review comprises four steps: a departmental self-study, an external review, a summary report, and an action plan for implementing recommended changes.
The process is led by a program review steering committee, which is chaired by an instructional associate from the Teaching and Learning Centre and includes the dean of the school, the department leader, and a representative from the Institutional Research (IR) department. Other members vary by department size and may include faculty members or instructors, support staff, and other administrators as required. The departmental self-study includes sections discussing six aspects of the program: curriculum and instruction, teaching and support personnel, student outcomes, student support services, program administration, and the learning environment. The process draws upon a variety of provincial, institutional, and departmental data sources, including provincial student outcome survey data, curricular documents, program and course evaluations, financial reports, enrolment reports, labour market data, and comparable programs at other institutions. The self-study report is usually drafted by the department leader or delegate with the support of the instructional associate and is approved by the steering committee. The external review committee is recommended by the steering committee and selected by the VP Academic, and generally includes three academic peers or community, industry, or employer representatives who are experts in the field. The external panel reviews the self-study report; visits the site either physically or virtually, interviewing students, staff and administrators, faculty and instructors, student support services, and other external representatives; and writes a report commenting on strengths and recommendations for improvement. The steering committee seeks input from the department, dean, and VP Academic and then prepares a summary report with final comments and recommendations. Accompanying the final report is an action plan that identifies key recommendations and associated initiatives for implementation, resources required, and timelines, which are submitted to the VP Academic and the education council. At City College there are common annual curriculum development funds, which are disbursed by the VP Academic on the recommendation of the education quality committee, and the timelines for program reviews are built around the fiscal year, so that action plans can be prepared before the funds are disbursed. This study focused on the experiences of department leaders and program coordinators going through the program review process as described above.

3.3 Qualitative Case Study Methodology

The study followed a qualitative case study methodology to consider the experiences of five department leaders and program coordinators who had participated in the program review process in the past five years. In exploring the experiences and individual perceptions of faculty members, I sought to derive meaning and uncover assumptions underlying the complex and multi-faceted phenomenon of program review, focusing on agency, power dynamics, and social structures within the context of a vocational institution. Therefore, I chose qualitative methodology, for, as Creswell and Creswell (2018) discussed, it is the most appropriate approach for exploring and understanding the multiple and varied meanings that are ascribed to social or human problems by groups or individuals.
Similarly, Merriam (1995) recommended qualitative research for "clarifying and understanding phenomena and situations when operative variables cannot be identified ahead of time; [and] understanding how participants perceive their roles or tasks in an organization" (p. 52). Moreover, Stake (1995) described that qualitative study "capitalized on ordinary ways of making sense" (p. 72). In a more current context, as Pino Gavidia and Adu (2022) described in their examination of storytelling through lenses of knowledge paradigms, methodologies, criteria for quality, and reflexivity, "the role as a qualitative researcher is as an intermediary in knowledge co-construction in the collection, interpretation, and revelation of the meaning behind the stories" (p. 1). The primary purpose of this study was to explore the ordinary experiences of department leaders, to uncover hidden presuppositions and power dynamics upon which the program review process and quality assurance systems have been built, to reflect on how they impact faculty members' experiences and sense of agency in program review, and to construct new meaning going forward. Given this, a qualitative methodology was assessed as appropriate and was chosen.

In considering the type of qualitative methodology to employ, I turned to Stake (1978), who, in their seminal work on case study methodology in social inquiry, highlighted that truth in fields of human affairs is best approximated by "statements that are rich with the sense of human encounter" (p. 6), and that case study is best used "for adding to existing experience and humanistic understanding" (p. 7). Additionally, Yin (2018) described case study as an empirical method that "investigates contemporary phenomenon (the 'case') within its real life context, especially when the boundaries between a phenomenon and context may not be clearly evident" (p. 15). Faculty members' experience of program review is a contextualized contemporary phenomenon where boundaries are not immediately evident. Since the purpose of the study was to explore these experiences and, if possible, gain an understanding of the power dynamics at play and faculty members' experience of agency through that exploration, case study was chosen as a suitable research design within qualitative methodology.

The case study design chosen followed Stake (1995), who proposed intrinsic case study when the case itself is of primary centrality to the study, and instrumental case study when the purpose of the research is beyond the individual case. In this study, there were five separate educational leaders describing their experiences of program review, and my aim as a researcher was to gain an understanding of quality assurance processes in general, and if possible, of the larger socio-political factors and underlying hegemonic assumptions that influence quality assurance in higher education. Stake described issues as "research questions that emphasize trade-offs and contexts" (p. 171) and stressed that in instrumental case study, "we start and end with issues dominant" (p. 16). In considering my research question, what is the experience of faculty and program coordinators leading the program review process, I considered four inter-related issues: Is program review meaningful? Is program review manageable? Can a program review process be both meaningful and manageable? Are the meaningfulness and manageability of program review in conflict with one another?
In alignment with Stake's (1995) definition of case study research, I explored my research question and the related issues through a collective, instrumental case study. In reconciling Stake's constructivist approach with my own grounding in critical reflection, I approached these four issues with an overarching focus on issues related to power dynamics and sought patterns in the underlying core premises that impact faculty agency.

3.4 Narrative Analysis and Case Study

Having established that qualitative case study methodology is appropriate, I turned my attention to the analytic framework. Sonday and colleagues (2020), in their methodological discussion about a study on occupational therapists in South Africa, described a blended narrative and case study methodology. They argued that a narrative approach can best emphasize the voices of participants, drawing on their experience to explore the research questions while staying true to the case study methodology, which is best suited to describe the socio-political and contextual factors around the topic at hand. Whedbee (2009), in their multiple case study using narrative analysis to explore nursing graduates overcoming academic adversity, noted that through the stories of participants, the use of narrative inquiry in case study could allow readers to develop understandings of how participants in a study perceive their situation in the context in which it is experienced. Sullivan (2012), introducing a dialogic approach to qualitative data analysis, asserted that lived experience is organized and best understood through narratives. Creswell (2016) described that the focus in narrative projects is to illuminate specific issues through reporting stories about individuals' lives. It was my aim in this study to shed light on some of the factors that impact quality assurance mechanisms and, in particular, internal, formal program review, through the stories of the individuals who lead these processes. Whether the salient factors would be socio-political or temporal, related to power dynamics or to the vocational nature of the institution, would become apparent through the case study and the representation of the faculty members' stories.

3.5 Participants

Twenty-five current or former faculty members and program coordinators were invited to participate in the study. Each of these had been a department leader or program coordinator at the time of a program review in the preceding five years. The selection of participants involved two criteria: a) the individual was or had been a faculty member or program coordinator at City College, and b) the individual had led a program through the process of formal program review at City College. The criteria excluded individuals who had not led a program through the formal program review process, including faculty members, instructors, staff, and students from departments who had participated in program review but not led the process as a department leader; department leaders and program coordinators who had led a program through an informal curriculum review process or external accreditation process; and instructional associates at City College who had guided departments through the process and chaired the program review steering committees. The scope and timeline of the study dictated that the ideal number of participants would be between three and five, with five as a maximum, and this was presented in the recruitment materials and ethics proposals.
While 13 faculty members initially responded during the recruitment activities, exactly five signed on to participate once scheduling of interviews became imminent. The aim through recruitment was to achieve as heterogeneous and maximal-variation a sample as possible, so that explorations could consider differences or nuances in experiences along gender, profession or vocation, years of experience, and other axes of equity/inclusion/diversity/justice; however, since there were five volunteers, each one was selected regardless of identity factors. Demographic information was not collected from participants; however, there was diversity of gender, place of origin, vocational or academic background, years of experience, and race amongst participants. The participants were from a variety of program areas and departments within City College, including trades, university transfer, professional development, and access or basic literacy.

Invitations to participate took place in two formats: first, through a presentation at a monthly leadership meeting run through the VP Academic's office, in which department leaders, program coordinators, deans, the VP Academic, and other educational leaders at City College gathered to workshop and discuss ideas pertaining to educational leadership; and second, through a follow-up email providing the same information as the presentation and including consent forms. While all participants were informed in writing and verbally that they could choose to leave the study at any time, no participants made that decision. Arms-length recruitment was not possible in this study, as I (as researcher) was in a dual role, as both researcher and administrator at the institution in which the study took place. As discussed in my positionality statement, this dual role can be perceived as an advantage that can add to the trustworthiness of this research (for example, submersion as described by Merriam, 1995) or as a limitation (introducing biases, for instance), in accordance with procedures to enhance trustworthiness in qualitative research (for example, see Jones et al., 2021). I engaged in member checking to ensure confidentiality and to promote credibility, accuracy, and dependability. To achieve this, I sent each of the participants a draft of "Chapter 4 Findings: Descriptions of Faculty Members' Experiences of Program Review", requesting review, comments, and edits, and asking whether they would like to remove any quotes. Each of the participants responded and confirmed that they were comfortable with the quotes and interpretations, and one requested a few small edits, which were incorporated.

3.6 Setting and Introduction of Participants

All of the participants worked at the same post-secondary institution and were department leaders or program coordinators. At City College, department leaders are elected faculty members within their areas of expertise, and program coordinators in continuing education and professional development are administrators. While demographic data was not collected, the participants represented some diversity of vocational, academic, or professional area, gender, years of experience, and race or background. Below, the college and each of the participants are introduced within the context of their departments.

3.6.1 City College

City College is a mid-sized publicly funded vocational institution situated in the downtown core of an urban centre with deep colonial-historical roots, being the oldest postsecondary institution in its jurisdictional region in North America.
It has operated as City College since the mid-1960s, with its predecessors providing vocational and academic training on one of the campuses since the late 19th century. The programming at City College includes academic, professional, technical, trades, and artistic programming as well as access programming such as English as an additional language; deaf and hard of hearing and visually impaired studies; and basic literacy, numeracy, and computer skills.

Five faculty members and program coordinators volunteered to participate in the study, which included a one-on-one semi-structured interview and a focus group with all five. All of the participants had been at City College between seven and ten years. Each of the participants and the programs that they represent are introduced below to illustrate the context and setting in which the project takes place.

3.6.2 Jamie

Jamie is the department leader in an academic upgrading and university transfer department at City College. Jamie's department offers university- and college-level academic upgrading and academic transfer courses for students with high-school experience who are looking to upgrade their skills to access university study or career programs that require university-level academic activities in humanities, science, and mathematics. Jamie collaborated with Kira and a third department leader in their program review, which involved three separate departments and approximately seven overlapping programs, including associate degrees and entry pathway programs. Jamie's department is quite large, with approximately 30 full- and part-time faculty members.

3.6.3 Kira

Kira works closely with Jamie and is also a department leader in an academic upgrading and university transfer department at City College, overseeing programs that offer university- and college-level academic upgrading and academic transfer courses and programs. Both Jamie's and Kira's programs struggle with understanding exactly how many students are in their programs, as many students in their departments declare a major when they enrol at City College and then shift focus without declaring this to the institution. Kira's department is slightly smaller than Jamie's, with approximately 10 full- and part-time faculty members.

3.6.4 Px007

Px007 is a program coordinator in the continuing education and professional development department at City College. Px007 participated as a department leader in two program reviews, both in part-time evening/weekend career-development certificate-level programs aimed at working professionals or those wanting to get into a new career path. The continuing education and professional development department at City College is revenue-generating and offers both non-credit courses and short programs, and credentialed certificates and diplomas. The credentials within Px007's department are subject to the same quality assurance measures as programs in the other departments at City College.

3.6.5 Seth

Seth is a department leader of a developmental program at City College that offers tuition-free upgrading in basic math, computer, and literacy skills for adults. Seth's program has a long and rich history at City College and is one of the only such programs that has been offered continually in the jurisdiction in which City College operates. Seth's department has approximately a dozen faculty members.
Over the past decade, funding to the programs in Seth's area was significantly reduced at both the federal and provincial levels, which resulted in many layoffs.

3.6.6 Shane

Shane is the department leader of a very small, specialized trades training program, one of the longest-running programs at City College. Shane's department runs one certificate program with fewer than 40 students per year and fewer than two full-time faculty members, including Shane as department leader. The niche program that Shane runs is very small, limited by space constraints within highly specialized lab classrooms. A summary of the participants, their departments, and additional context is provided in Table 1.

Table 1
Summary of Participants and Programs

Jamie and Kira
Department: University transfer / academic upgrading
Additional program review context: Kira and Jamie collaborated in their program reviews with a third department leader. Three departments underwent program review together, reviewing seven programs. The departments are large, with Kira's department housing approximately ten faculty members, and Jamie's department housing approximately 30 faculty members. There are approximately 700 students annually in their combined departments.

Px007
Department: Continuing education and professional development
Additional program review context: Px007's department houses multiple programs, and they went through program review twice, two years apart. Both programs were career / professional development programs within the revenue-generating continuing education and professional development department.

Seth
Department: Literacy and adult education
Additional program review context: Seth's department offers tuition-free basic education and upgrading in math, science, and English. Over the decade prior to the program review that Seth undertook, they experienced a significant loss of funding, resulting in faculty layoffs and program discontinuations affecting a large proportion of faculty.

Shane
Department: Specialized trades program
Additional program review context: Shane's program is one of the oldest programs at the college and is highly specialized and unique within North America. The program is very small, with only two faculty members (one of whom is a part-time auxiliary instructor) and fewer than 40 students per year.

3.7 Data Collection, Preparation, and Analysis

Data collection tools included semi-structured interviews and focus groups, offering robust opportunities for participants to share their experiences. The questions roughly followed the list below; as the study was qualitative and the interviews were semi-structured, they included prompts and follow-up questions, adjusted in real time to give more agency to the participants' voices:

• Please describe your experience of leading your program through program review.
• Is this process important? Why? For whom?
• What work was involved? Who helped you with the process? What resources did you access?
• What did you learn going through the process? About the program, about yourself?
• How would you describe the overall experience?
• Looking back, is there anything that you know that you wish you knew when you were starting out?
• Do you have any recommendations that you would share about the process?

Data was collected via audio or audio-visual recording, depending on whether the interviews were in-person or online. Two in-person interviews were recorded on an iPhone, and three online interviews utilized the institution's enterprise Microsoft (MS) Teams account.
Recordings were saved onto the Thompson Rivers University (TRU) OneDrive and on a local device (a non-institution-owned laptop). The data for the two in-person interviews were transcribed directly (i.e., not using audio-to-text software) and, since MS Teams automatically transcribed the recordings, the three online interviews were transcribed starting with the recording transcript. The focus group was hybrid, with four participants in a room on campus and one joining online through MS Teams. The transcripts were then stripped of names and identifying information, with notes and coding sheets using pseudonyms, and a single pseudonym sheet was saved for reference. During the study and writing of the thesis, the recordings, transcripts, notes, coding sheets with pseudonyms, and pseudonym sheet were stored in a password-protected OneDrive folder on an institutional (TRU) enterprise Microsoft platform.

In the interviews and focus groups, anonymity was not possible, as every participant was already acquainted with one another and with me as researcher. The introduction to the interviews and focus group included a discussion of this, a reminder for participants to be mindful of what they shared, and an assurance that confidentiality would be maintained using pseudonyms without identifiable information. Confidentiality as well as trustworthiness were further ensured through member checking that involved sharing the findings with participants for review, validation, and accuracy prior to submission. The raw data will be destroyed seven years after final use, either upon completion of this study or upon refinement of more robust study questions for potential future studies.

Transcripts from interviews and focus groups were analysed using a narrative analysis approach, following Creswell's (2015) steps for qualitative data analysis: preparing and organizing data; exploring and coding; and building descriptions and themes. In preparing and organizing, the interview and focus group data were transcribed directly, with notation following an abbreviated version of the Jefferson system as described by Sullivan (2012), who explains that some discursive markers are useful guides to the emotional register "for the purposes of examining subjectivity and emotion" (p. 69). The symbols used in transcription, as they appear in Sullivan (p. 69), are presented in Table 2.

Table 2
Abbreviated Jefferson Transcription Symbols

Symbol: ((swallow))
Connotation: Additional comments from the transcriber in double parentheses, e.g., about features of context or delivery.

Symbol: CAPITALS
Connotation: Capitals mark speech that is emphatic.

Symbol: ()
Connotation: Empty parentheses signify inaudible talk.

Symbol: _____
Connotation: Underlined words signify stress in tone.

Exploration and coding of the data were emergent, using key moments and sound bites (Sullivan, 2012), that is, short, concise, and impactful excerpts or quotes extracted from interviews and focus groups, to develop codes and seek patterns, applying a mix of direct interpretation and categorical aggregation. Guided by Stake's (1995) analytical approach for instrumental case study, the experiences of each participant were carefully examined and converging experiences were presented under the same category. First, I transcribed the interviews and highlighted the key moments and sound bites. Then, as patterns emerged, I began creating categories and sub-categories and "assigning" them to key moments.
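To make the mechanics of this categorical aggregation concrete, the tallying step can be illustrated with a short script. This is a hypothetical sketch added for illustration only: the actual coding in this study was done manually, and the excerpt and code names below are invented. The sketch simply counts how many key moments were assigned to each theme and sub-theme, producing counts of the kind reported in Table 4.

from collections import Counter

# Hypothetical key moments: each coded excerpt carries one or more
# (theme, sub-theme) labels, since a single comment can relate to
# multiple themes and sub-themes.
key_moments = [
    {"quote": "it was really valuable to step back...",
     "codes": [("Purpose and impact", "Critical and analytic reflection")]},
    {"quote": "the data supports our thoughts...",
     "codes": [("Purpose and impact", "Critical and analytic reflection"),
               ("Project structure and process", "Data")]},
]

theme_counts = Counter()
subtheme_counts = Counter()
for moment in key_moments:
    for theme, subtheme in moment["codes"]:
        # The theme is incremented once per assigned code, so a moment
        # tagged with two sub-themes of the same theme counts twice.
        theme_counts[theme] += 1
        subtheme_counts[(theme, subtheme)] += 1

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")

Counting per assigned code, rather than per excerpt, reflects the note accompanying Table 4 that most key moments related to multiple themes and sub-themes.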
Considering a) the categories that had already emerged and b) my initial research questions, I wrote sample questions for the focus group. The initial codes that emerged are listed in Table 3.

Table 3
Initial Themes

Themes: Time and workload; Project structure and process; Resources and supports; Faculty perception; Purpose and impact; Partners; Data; Impact lenses

Subthemes: Timeframe; Faculty release; Time management; Training, support and coordination; Structure of project; Templates; Leadership; Budget; Agency; Reputation / reality; Faculty buy-in; Underlying emotions; Influence strategic planning; Engagement and feedback; Critical and analytic reflection; Propose and implement program changes; Advocate for resources; Assess program impact and feasibility; External; Internal; Data collection methods; Disaggregated data; EDI; Indigenization and decolonization

The focus group questions were:

• Is program review a manageable process?
• What are some tools and resources that help it to be manageable?
• What are some of the pressures that result in it being less-than- or un-manageable?
• Is program review a meaningful process?
• What is the value of the process, and to whom?
• What are some of the most meaningful aspects of program review?
• What are some of the barriers to review being as meaningful as it could be?
• What recommendations would you share for City College? For the sector at large?

Once the focus group was completed and transcribed, I identified key moments and sound bites (key ideas, opinions, or emotions expressed by participants) and incorporated these into the key moments document with the categories that had emerged. Eventually, through categorization and re-categorization, I arrived at four themes and 18 sub-themes, which are outlined and described in Chapter 4.

3.8 Ethics Approval

The project was submitted for ethics review and approval to both the City College (pseudonym) Research Ethics Board on February 23rd, 2023, and the Thompson Rivers University Research Ethics Board on February 27th, 2023. Both proposals required minimal revisions, which were submitted on March 23rd, 2023 to City College and on April 14th, 2023 to Thompson Rivers University. Approval was received from the City College Research Ethics Board on April 6th, 2023, and from Thompson Rivers University's Research Ethics Board on April 19th, 2023 (see Appendix 1 and Appendix 2).

Chapter 4 Findings: Descriptions of Faculty Members' Experiences of Program Review

4.1 Introduction

This chapter introduces the stories that participants told through the one-on-one interviews and the focus group. The participants' narratives are presented with a first level of analysis, organized into four themes which emerged through transcription and identification of key moments and sound bites: purpose and impact, project structure and process, time and workload, and power and relational dynamics. Each of the themes is explored with an emphasis on the participants' voices and the aspects of their shared experiences that were emphasized through their dialogues.

4.2 Themes

Through analysis of the interviews and focus group, four major themes emerged, each with several sub-themes. A numerical representation of the themes and sub-themes identified and the number of times that they were mentioned within the Key Moments is presented in Table 4. It must be noted that the count below is not an indication of importance or significance. Additionally, most of the participant comments that were identified as Key Moments related to multiple themes and sub-themes.
The sections that follow explore each of the themes and sub-themes listed above, drawing from the stories that participants shared and the themes that emerged from reading and analyzing those stories. A common thread throughout all of the themes and stories is whether and how the project is meaningful and manageable.

4.2.1 Purpose and Impact
Many of the stories that participants told involved the purpose and impact of program review. The most common discussion points were reflecting both critically and analytically about departments and programs; engaging in meaningful feedback with internal and external community members; advocating for departmental resources (including but not limited to curriculum development funds); proposing and implementing program changes; influencing strategic plans; addressing equity and access issues; reflecting on Indigenization and decolonization; and assessing program viability. This section considers each of those main sub-themes and highlights some of the stories that participants told and the discussions they had.

Table 4
Themes and Sub-themes

Theme / Sub-theme (# of Key Moments)
Purpose and impact (59)
  Critical and analytic reflection (13)
  Influence on institutional strategic planning (11)
  Engagement and feedback (16)
  Program improvements (20)
  Advocacy for resources (9)
  Assess program impact and feasibility (4)
  Equity Impact, Indigenization, and Decolonization (8)
Project Structure and Process (45)
  Training, support, and coordination (17)
  Resources and supports (10)
  Data (14)
  Leadership and oversight (4)
  Reputation of Program Review (3)
Time and Workload (27)
  Timeframe (14)
  Release time (15)
Power and Relational Dynamics (23)
  Faculty consensus and commitment (9)
  Agency (15)

4.2.1.1 Critical and Analytic Reflection. Ideally, program review is a reflective activity that affords department leaders the opportunity to step outside of daily operations and teaching to evaluate the program from different vantage points and through new lenses, as participants described. Some participants found that they made new discoveries about their programs, and others were able to validate assumptions that they had made or inherited. Some participants used the program review as an opportunity to reflect on, and document, the history of the program where no record previously existed.

Several participants spoke of the unique opportunity to take a step back and away from daily business and engage in active reflection, as Jamie stated, “it's very easy to get really bogged down in the details and the admin work of the job, and so it was really valuable to step back and look at the big picture of what are we doing? What are we trying to do?” (Jamie 3). Jamie also described that this opportunity to reflect deeply on the program, while a welcome opportunity, can also put departments in a vulnerable position:

I think that there's huge value in stepping back and looking at programs as a whole... I think all of us... spend a lot of time in the details of any particular course or program and we don't kind of step back and say does this program still make sense?... I don't think there's a lot of opportunity to do that because our jobs are busy and because it's quite scary to sort of just say to the dean, you know, I don't think this program makes a lot of sense, what should we do... There's no other time to do that.
(Jamie 6)

Similarly, Shane described the benefit, and the implications, of considering the program from a big picture perspective, in the wider institution:

It's quite enjoyable, I mean, knowing I have a big better picture of…. not only our program, but also... our program within the college. And because we're such an old program, lots of this stuff is not really up to... [it’s] not as detailed as what required by today’s standard. (Shane 6)

Some participants described that the reflective nature of the review expanded their holistic understanding of their programs. As Seth described, they “have more tools for thinking analytically and reflectively about the different aspects of our department” (Seth 10). Participants spoke of the opportunity to consider the programs through different lenses, both critically and analytically, in a way that they had not done prior to the review, as Px007 described:

certainly it was an opportunity to introspect and to... look at the program through different lenses, through different perspectives, from instructors, from students, from externals. [There] was data that I was exposed to during the running of the programs but when you compile all of that, when you dissect it, when you have analyzed it, it gives you...a different style of thinking about the program. Here are some of the strengths of this program, and here are some gaps that we need to address. (Focus 20)

There can be particular value in learning to consider operational aspects of their program alongside curriculum and pedagogy, and in carrying that knowledge forward, as Seth went on to describe:

It was my first time going through something like a program review, so it was useful for me just to learn process-wise how that works and I think that I carry those sort of structures in my mind now as I go forward working in the department... it's common to kind of zero in on... curriculum and instruction and I hardly ever think about money stuff for example and so something like a program review really forces you to think about all the different components of your department almost as equals, in a way. (Seth 10)

In the process of this reflection, some participants expressed their appreciation in being able to validate assumptions that they had about the program. This was key for Shane, who was able to prove what they knew to be true, that their program served as an entry- and continuation-point for City College students. As Shane illustrated:

Shane: I think it's great. Because, we are so entrenched in teaching our class and running the program, so we get [an] outsider to take a fresh look at our program and... we see lots of like different data... the data supports our thoughts... for example that lots of the students... take our course and move on to another program in the college.... we always had a feeling of that... but during the review... our research team, gather all the data and ... we have the facts. We do have a big percentage of students continue with the other program.
Researcher: So you've got some... intuitive knowledge about the program and then the review was able to support that through data.
Shane: Yeah....because that's just our... our feeling ((emphasis)), but... now we have the hard data proving that. (Shane 1)

As Shane went on to emphasize, “now we have the... the proof that ((clap)) it did happen... lots of student are taking more than one program here. So, I have a better understanding ((clap again)) of...
the students in the program.” (Focus 21)

Some participants also reflected on the value in looking at the historical aspects of their programs and departments. As Seth outlined, they enjoyed the opportunity to go back to the roots of the program:

I really liked digging through some of the historical records that were in my office ... 40 years worth of... typewritten documents, and, I never get time to go through what's sitting on all around me, which is like an archive for our department... and I only really just skimmed the surface because I didn't really have time to do a deep dive into everything that was in there. But ... being able to go through, old documents and learn about how we used to do things compared to how we do things now and understanding ... some of the roots behind why we do what we do. (Seth 6)

For Shane, there was great value in creating an historical record of their program, given that none existed, and in making updates to the program while honouring the history. As they emphasized:

[One] thing about the review is that I [found out] a record of the program... evolving, how it evolved. Our program is ... almost as old as City College, like, 47 years. And luckily... I learned under the original program leader who started the program 47 years ago. So, I know the whole... history. And when we do review, we find out the school doesn’t have much record... we kind of fill out the plan, like storyteller, you know, like, oh it was like that... that's the reason why we started the program, [we] fill out all the blanks... So now we have a record of like how the program evolved... Because back then, the target student. You know, that it's different than, you know over the years, it is changing, the student, you know... it's different. But I was surprised that the school didn’t have a record. Right?
Researcher: [So you] took this opportunity to create a historical record,
Shane: Yeah and luckily the person, [the] two of us, have been around, so ...we know the whole history. (Focus 25)

It is also important to note that some participants highlighted a gap between the potential of program review and what it is, given time and resource constraints, as Seth highlighted:

There's... the value of what it could be. And there's the value of like what it actually is, right? ...I think it’s so important that programs do have time to discuss, kind of, from more of a bird's eye view... escape the daily minutiae, the day-to-day details, and really take time as a group to look at what it is we do. Why we do it, and where do we want to go from here? And that could be a very kind of motivating... growth-fuelled kind of space to be in... I imagine [if there were] a retreat element of it or something where we had a whole day for the faculty to come together and have these deep conversations, I think that could be really awesome. (Seth 14)

Participants described their commitment to program review as an opportunity to reflect critically about and to consider their programs analytically, as well as their challenges in doing so. In looking at their programs through these critical, analytic, and in some cases historical lenses, participants drew connections between their own local program reviews and the larger institutional strategic plans and initiatives.

4.2.1.2 Influence on Institutional Strategic Plans. Participants spoke of the action planning phase as one of the most fulfilling and fun aspects of the process, and one of the most frustrating.
Program review has the potential to influence institution-wide strategic planning, or to at least inform department-level strategic plans; however, this was not always the story that participants told. In some instances, there were institutional plans that simply did not take findings of the program reviews into account, and in other instances, carefully crafted recommendations that participants brought forward were neither accepted nor denied, but simply ignored. This left some participants with a sense of alienation and a desire for connection to institutional priorities and initiatives.

Crafting recommendations was depicted as an enjoyable and captivating exercise for some participants. For example, as Jamie described, “things come to the forefront when you're like, let's start making recommendations. Ohh my... I can't stop writing recommendations” (Jamie). However, most participants also conveyed a disconnect between the recommendation and action plans in their program reviews and the plans developed and implemented in the institution-at-large. Seth detailed this dynamic, and a feeling that “there is a plan to do something drastically different with [the department], and I am curious how much of that is based on the review.” They expanded to predict that “I bet none of it is based on the review, whatever they have in mind... well, then what's the point?” (Seth 12)

Similarly, Kira and Jamie discussed the experience of having recommendations and action plans ignored:

Jamie: I don't think we got a no, it was just... shouting into the void.
Kira: Yeah it wasn’t even... “no, we can't do that”, it was, “ohh, that's nice”... at best... as an entire report.
Jamie: It existed... (inaudible)
Kira: I think I would have loved it if they said OK, so we can't do the facilities thing that you asked for, but we're going to add it to the campus plan and then you know, 5 to 8 years out, maybe we'll think about it... that would have been more than what we got.
Jamie: Yes, some feedback from above... other than... oh, great job, that's nice... but, no meaningful... yeah, we really like this one. This one is, I'm sorry no, we're never going to get an improvement in your marketing... meaningful feedback, would have made it feel like there was a point... to some of this.
Kira: Meaningful connection to other campus initiatives or college initiatives would have been huge. (Focus 23)

Some participants suggested having “better parameters around what was likely to be a feasible recommendation” that could have some connection to an institutional plan, or “another reason to do project #36 on our strategic plan... I didn't see that connection” (Kira 12). Most participants did not have previous program reviews or action plans to revisit, and some suggested a mechanism by which the plans could be evaluated some time down the line to assess the efficacy and highlight themes across programs. As Px007 explained, an evaluation cycle might “make it worthwhile, to have it on the record and for the college to come back and evaluate. Here is the program review, what are some of the barriers or challenges you faced in achieving your action plan items?” In this way, Px007 went on to convey, departments may be able to address what action items they were not able to achieve, “so that gets captured in. And then if 10 different programs at the college have that, and they're all saying marketing, marketing, marketing, then we know... that needs to get more attention or more funding at the college” (Focus 30).
Seth also brought up the question of departmental strategic planning and program review, and identified a disconnect at that level as well. Similarly to Px007, they introduced the idea of a wrap-around evaluation for program reviews and the action plans:

Seth: it raised lots of interesting questions for me about... why is it called program review versus strategic planning for departments?... would it feel different for the department if we approached it more as like we're coming up with our strategic plan? Because, then,... there's a plan in place and then you evaluate it, whereas it felt like we were skipping right into evaluation almost... it's almost like, there there should be like a wrap around after a year or something where you come back and... and check in rather than just be like, OK, phew! ((brushing hands together)) we did the program review... I never have to think about that again. (Seth 11)

Making recommendations and action plans was described by participants as one of the most important aspects of program review. Some participants spoke about frustration around what they perceived as a disconnect between the program review and action planning process, and institutional planning. In considering the review process and what recommendations to put forward, participants described meaningful and fulsome engagement with the many parties affected by review as one of their most important pillars, as is described in the following section.

4.2.1.3 Engagement and Feedback. Participants spoke about meaningful engagement as one of the most important purposes of program review, for gathering feedback, illustrating strengths of the program to interested parties, and building connections with internal and external partners. Program review provides a unique opportunity for departments to showcase their program to industry and community, administration, and colleagues from within and outside the institution. Program review also, ideally, provides space and resources to collect deep and fulsome feedback from students and graduates, and to engage in impactful discussion with community and employers.

Many of the participants described the opportunity, and necessity, to highlight the value of their programs to various parties. Px007 discussed this as a chance to connect “with external partners or subject experts to also, in a way, showcase. It was also a matter of pride, here is what we do at our institution.” Another benefit of these connections, as Px007 presented, is to “exchange ideas with them, they bring in different perspective” (Px007 5). For Seth, hearing the value that Px007 found in connecting with the external panel helped them to see the value in the connection in their own area, even though the recommendations brought forward might not work in the context of City College.

Px007: [It] just raises the profile of the college when you invite people from outside and show them what you have running at the college, how students go through these training programs..., and, even though it was just a snippet... there's that value to foster those relationships, those connections with... colleagues at sister institutions,... And it's also an opportunity to share the struggles, right? So they might tell you: OK, but here is what we do at our institution, or here is what the industry needs, and then you know that, well here is what we are doing and here is a gap. Or here are some challenges or barriers we are facing, and how can we address those in our in our programs?
Seth: Yeah, I think that the external review piece, just in terms of building relationships... it's so rare that we get to invite external people to come in and see what we're doing, and it is really kind of uplifting for the morale to have them say: wow, this is really incredible. [When] I think about the external review, I used to think of it as... they gave us all these recommendations that actually don't make sense for all kinds of reasons. But listening to you talk made me realize that even if their recommendations were not all that useful… Having them there sort of served a different purpose, which was... we had people from the... School Board come to City College and... it would be great if we could be a lot more aligned with them. This gave us an opportunity to actually, invite them in when we otherwise wouldn't have had that opportunity, and same as someone from [a large research university in the region] came in and somebody from [another college in the region], ...it was an opportunity to, kind of, involve community. (Focus 28)

Participants also described that there are unique benefits of the reflections from external reviewers. As Kira discussed, the external review can support assumptions held by the program and challenge them:

I think it was meaningful to get reflections from the external reviewers and see how they saw us. In particular, one thing that surprised us in a way and not in some others, was that (pause) department leader release had been an issue that some of the department leaders had been raising throughout, that this was a key recommendation for them, to increase, and the externals did not reflect that. So they said that the amount of release our department leaders had was consistent with or even generous compared to anything they'd experienced... (Kira 8)

Additionally, there can be some vulnerability involved in the external review, as Jamie elaborated, “yeah, [it was] kind of fun. It was some anxiety about... all of this dirty laundry. Now we're going to show to people from other schools.” Similarly to Px007 above, Jamie noted the particular value in getting feedback from external reviewers in that they could offer “here are a couple things where we said ‘we don't really know how to solve this problem, but... maybe this is a solution’ and they said, well, here's how we do it at our institution and it works” (Jamie 10).

Participants also discussed the importance of meaningfully engaging with students and graduates, and the complexities in doing so. As Px007 discussed, these connections can be quite meaningful, “[I] remember, one graduate was quite appreciative of being invited… and was quite appreciative of having their voice being heard.” Px007 expanded on this, elaborating that:

oftentimes, you know, once we take a course or a program of study at an organization or post-secondary, there might be a survey that comes along, but seldom do we get a chance to really express how we felt, because, I think people innately feel the need to be heard and this kind of process, program review, gives that opportunity to graduates. So that kind of outlet for the graduates to come back, engage, to provide their feedback, I think, is also valuable. (Px007 6)

Seth also emphasized the importance of engaging with students, and highlighted some challenges, particularly in a literacy program, where “engaging with students in education can already be challenging, like having people show up on time and...
and be present [is] a bigger ask that involves more barriers than it might for other departments. So, we had to have pizza, for example at ours ((laughter in room)) because we might not get anyone [to] show up” (Focus 9).

Similarly, program review provides a unique opportunity to engage meaningfully with community, industry and employers, which can be particularly important in a vocational institution such as City College. As Px007 described, it is important to gather information from “people who are in the industry, who are in representing employers and so on because ultimately to me, that's the... success of the program is how well the students can do once they graduate” (Focus 8). As Px007 explained, there are some unique challenges in gathering meaningful information from community, industry, and employers:

There were times where, yes, I was able to engage with them to a good degree, and efficiently... to get the... the feedback quickly... at the right time. But at other times there were some barriers that made the process a little bit less manageable. [...] One example is just sheer availability... and also some of them were very open about expressing what they thought. Others, not so much. (Focus 8)

Every participant spoke of their engagement with impacted parties, including students, graduates, colleagues, instructors, employers, administration, and government ministries. Participants spoke about their successes in engagement, their strategies and methods for collecting meaningful feedback, and in some cases, their challenges or barriers in doing so. In most cases, the commonly held goal in gathering this feedback was to effect positive change in their departments, to the benefit of each of the impacted groups, with students at the centre.

4.2.1.4 Program Improvements. In addition to reflecting on their programs, participants brought up the value in maintaining program currency and in being able to propose and implement changes to curriculum and pedagogy, and to other elements of their program. A major theme was the importance of and satisfaction in making action plans that are realistic, have potential to result in meaningful change, and represent solutions to problems that exist or arise. Some barriers to creating realistic action plans were also discussed. These included a lack of institutional commitment or resources, taking faculty desires into account, and creating action items with enough specificity to be implemented.

In general terms, program review can be an opportunity to make program improvements and to solve problems. For example, as Kira described, “as a department lead, I would say that a significant portion of the job satisfaction comes from preventing and solving problems for the department, whether for faculty or students. So, the best parts of program review are looking at problems and finding ways to make them smaller problems or not problems at all” (Kira 11). Kira continued to state that one of the greatest values in the program review was in hearing from faculty about what improvements need to be made, and contributing to solving those problems:

... when they can be engaged and when there is a meaningful chance to be heard about the things that have been bothering them [there is value in saying] okay, can we take a higher level look at that and maybe make an improvement. So for me...
when I was talking about the physical learning environment, hearing from the faculty what they found was a problem with the spaces they're teaching in, and, with the online learning environment, people who commonly use Moodle were saying, you know, this part really bothers me... Why don't we have anything that is modern for collaboration? So hearing those sorts of reflections that for me was some of the most meaningful faculty engagement. (Kira 11)

Program review can also be an opportunity for department leaders to learn about curriculum and program design, and to use this learning to improve curriculum and instruction, as in the case of Px007, who highlighted that they “learned some of the very interesting techniques, strategies, methodologies that are quite common in the education sector… the application of Bloom’s Taxonomy, for example, that was something which I don't think I was familiar with before my first program review.” They were then able to “ensure that the program is successful, right? What are some of those that make for a good program and... and then working our way backward with that to find the right information that can help us” (Px007 4).

For changes to come from program review, action plans need to be achievable, which some participants emphasized. As Px007 outlined, the biggest value is that the plan “can be implemented. It's realistic... it's not something that becomes too onerous to really put it into practice to a fair degree... maybe not 100%, but to a fair degree” (Px007 5). Seth described a prioritization activity during action planning that could support making implementable plans, however rushed it may have been:

... they framed it in terms of “these are all the things we want to do. What do you rank as important and as urgent?”... I liked thinking about decision making in terms of importance and urgency, so I appreciated that, but it was also a bit arbitrary... “How important is that? How urgent is it”? ((snapping fingers))... we kind of just plowed through each thing and gave it a rating. (Seth 5)

Most participants, however, spoke about the barriers to creating action plans that were realistic and achievable. As Px007 discussed, “not the large majority, but a few of the recommendations in action planning were a bit out of the scope of what we could do” (Focus 13). Px007 went on to express a mix of frustration and relief in including college-wide recommendations:

That was something which [would be] challenging... just implement by myself... it is more of a college-wide, kind of, thing... But it was a bit of relief also... I know I cannot do this. (Focus 13)

Another key tension around implementation of action plans and recommendations for participants was around consensus-building. As some participants illustrated, in order to engage faculty members and gain consensus, action items were intentionally vague. This can hamper implementation, as Seth conveyed:

One of our recommendations was to improve our [course] curriculum ((laughter in the room))... So now... we have some curriculum development funding to do that, but... that’s so open-ended that it’s really not useful... where are the resources to actually dig into.. what’s this going to look like? The amount of curriculum development funds we got is quite small and then we have to do the work, but, philosophically or big picture... what does that mean?
(Focus 12)

For some programs, creation of a realistic action plan was hampered by facilitators and external reviewers misunderstanding the unique needs of their programs. In highly specialized fields, having external reviewers from industry is necessary; yet, they may not understand the nuances of teaching and running training programs. As Shane detailed:

Shane: … other programs are more general. Although... the external reviewer [is] in industry already, but they are…. also different... [their] position is more of a business owner, you know.... As the business owner and for us, we are training the [trades people]. There’s a... our position is a bit different.
Researcher: There's a different perspective.
Shane: Yeah. Yeah. Different yeah, point of view, yeah. So you know, that’s… Why some of the action plan suggestion? Yes,... they are valid, but in our situation, our course is very short... I mean, we have to be careful, are we able to do that?
Researcher: Right.
Shane: [So] I said, I cannot meet all these… that they give, you know? (Shane 9)

It is important to note that some participants acknowledged that even when the action plan cannot be implemented fully, it can still be a useful tool. As Shane explained, “I constantly refer back to the report, I mean, and then see, what I have to edit … Like yesterday... so I was looking at both, like referring back...can I implement that, you know, in there?” (Shane 3). Additionally, and as mentioned above, some participants suggested a mid-cycle, or after-review evaluation process to assess the implementation of action plans after some time, and to capture interdepartmental themes that point to larger institutional considerations regarding implementing needed changes and improvements, as Px007 specified:

Some action items that weren't really within the scope or... or weren't really implementable just by ourselves. I think,... one of the things that needs to happen is, is an evaluation phase after the program review is complete, after the curriculum development is complete, is to... for the college to go back and... and evaluate. ... like Kira was saying earlier, it wasn't worthwhile … to spend so much time and energy to come up with that recommendation, only to realize that, well, we can't really take action on that ourselves... Why we weren't able to achieve those? That is an opportunity for us to go back and say... here is one or two action items that I was not able to achieve because of such and such reason. (Focus 30)

Participants discussed that the satisfaction they experienced through program review was in identifying, describing, proposing, and implementing program improvements. The program review gave them a unique opportunity to do so, even among challenges and frustrations. In many cases, program review was additionally portrayed as a tool to be able to access resources to make these necessary adjustments and changes, as explored in the next section.

4.2.1.5 Advocacy for Resources. Program review can provide a unique opportunity to access resources for programs. Participants characterized the review and action plan as both an opportunity to identify issues and bring them forward, and a tool with which to advocate for resources to address those issues. Participants also highlighted challenges in accessing and requesting resources, such as lack of clarity around curriculum development funding applications and parameters.
Some participants described a realization that a hidden, implicit purpose of the program review was to justify requests for resources and expressed a desire for transparency. As some participants explained, program review affords a unique opportunity to highlight concerns that are otherwise ignored or worked around. As Jamie stated, “it was an opportunity to take stock and... put in one place, all of the things that... I thought should be changed... and that other people have been complaining about” (Jamie 2). Similarly, Kira outlined the benefit of putting aside time to deal with known issues:

[Without the review] we would not necessarily have had the structural value of... [looking] at the student experience as a whole or... the workload as a whole... being in this program, or teaching in this program, and what resources are needed, and that, I think, has a direct reflection on the quality of the experience both for employee or for a student. So for instance, the value in saying these are some limitations of our physical spaces and some potential strengths that we are not leveraging helps us say: here's a way that we can make our program better... so it's that deliberate space to say “this has been building up for a bit, it's time to deal with this problem.” (Kira 10)

Kira also explored the benefit of accessing and revisiting former requests for resources in the context of the review:

I found it really helpful to work with [a staff member who was] not part of the program review process in in terms of the steering committee or in terms of a big formal involvement. But, [the staff member had] a bunch of recommendations that had already been written about the [learning] environments. And those were very detailed and very helpful in making recommendations for the [lab] space updates that are… needed and to describe the constraints of working with the existing [lab] space... I think that the reason that she had this info already written and ready to go was for previous capital requests and related requests just for [lab space] updates, because it's been a known issue for some time. (Kira 3)

Further to bringing previously known issues forward, participants expressed appreciation in the ability to access resources to improve the issues that had been identified. Seth and Px007 discussed the value of having the action plan to reference when making desperately needed facilities upgrades, for the benefit of students:

Seth: I think the most useful part of it was being able to point to it and say “give us money because our report said this”. So, for example, there was end of fiscal money that [the dean] let us know that existed. And so one of our recommendations was we needed laptops dedicated to be loaned out to our students. And so, we used our recommendations to ask for end of fiscal money and we got it. We got a bank of 10 laptops. So that was pretty awesome, and I'm going to try again... when capital requests are due next time around, I'll ask for tables and chairs that are nicer and don't have nails sticking into people's butts or that kind of thing. ((laughter)) I'm going to go for it... It's like kind of a utilitarian, just way to get, other resources.
Researcher: And... who benefits from that utilitarian...
Seth: I think that students do for sure. Yeah.
Px007: Yeah, something similar happened to my program review as well. The first one that I led was IT and we were running a lab that had not seen any computer upgrades since 2004 or 5.
All: ((shocked laughter))
Px007: Something like that… and this is an IT program.
All: ((laughter around the room))
Px007: So as Seth said, right, we got that as a recommendation to upgrade the labs to a more modern lab that students can use, and now you can point out, well, here is the recommendation. So here is the funding that is linked to that directly. (Focus 24)

Most participants, additionally, discussed challenges in accessing resources to make changes, particularly in applying for adjudicated curriculum development funds. Jamie, Shane, Px007, and Kira presented two separate but interconnected issues around curriculum development funding. First, although the program review timelines were set to align with curriculum development proposals, the proposals themselves were perceived as undersupported in time and guidance. Second, the program changes that were recommended through the review did not fit easily into the curriculum development (CD) proposal framework:

Kira: ...and I think for us too, one of the areas where perhaps some support and information was missing was around the parameters for CD funding, because so much of our action items were not directly around improving a particular course or set of courses or even a Program Curriculum Guide. It was more around other elements of program quality and... we were able to get enough support to see how it fits... in an application... because we had [a] kind of lack of… follow through and, frankly, support and information and transparency from Admin. It felt very bizarre. (Focus 18)

Additionally, the structure of the curriculum development funding applications was not always conducive to accurately describing the work that needs to be done, which resulted in confusion in writing proposals:

Shane: It's just kind of like... let’s just make it up, the CD proposals, right? ((laughter and nodding in the room)) I had the same experience... I don't know how it goes, I mean...
Jamie: Yeah, how many hours is that going to take...
Shane: I mean, some of them are so big... I don't detail step-by-step, like, half-an-hour on each step or level.. Or whatever the point is... it’s so big, right? So... we just kind of make up the stuff... for the sake of doing it already.
Jamie: Yeah, had it been my first CD proposal, I would have really freaked out... but having done some before, you kind of know, okay, I'm going to throw in some times and, they're going to cut it down by 40 to 60%. So go ahead and overshoot those numbers a bit ... there was really no help other than Kira... there wasn't additional support. So if we hadn’t known what we were doing, it would have been a lot more stressful.
Shane: ... and it seems like it's just a set amount... for program review and CD funds, it’s like around this number... So it doesn't matter... how much I ask for ... usually it's around this... (Focus 19)

Some participants also identified or speculated about a hidden, implicit purpose of the program review, separate from those explicitly stated. As Jamie elaborated, it may have felt less incongruous to have known about the dual purposes:

I think it would have been helpful to, at the start, have a sense of what exactly the purpose of the review was... It was sort of given to us as the purpose is you were going to create these documents. The purpose is to arrive at the completed report, and... even if they had said, you know...
this is all just leading up to figuring out exactly what we want to put in that CD proposal and putting in the CD proposal, and that's the real goal... I sort of knew in my mind that the secret goal was to create things that you can later point to and ask for money for... you might as well explicitly say: the point of this is to figure out how we want to spend money, and to put it into a permanent... official record, so that you can point to it later and say, well, in our program review, we determined this was a priority. So please give us money for this. (Focus 27)

Participants explained that one of the benefits of program review was the unique opportunity to advocate for resources more effectively than they were able to in their day-to-day operations. They also described frustration around how those requests were supported and organized. The question of an implicit, hidden purpose of program review was discussed. This gives rise to broader questions around program review as an activity of assessment, rather than an evaluation, as discussed in the following section.

4.2.1.6 Assessment of Program Impact and Feasibility. In addition to accessing resources, participants raised the concept of program review as a surveillance tool, or a mechanism by which to assess program impact and feasibility. In addressing impact and sustainability, participants discussed accountability: to what, and for whom? In some departments, the value of program review in assessing financial feasibility and impact to industry and employers was explicit and accepted, as in Px007’s case, working in a continuing education and professional development setting:

I've always been driven by the goal to... to have the right program for the right audience to meet the right needs of industry. And right can be quite subjective, but right in the sense, basically making sure it aligns with the organization's goals and supports the vision... The renewed program can be… a net benefit both to the students, which is basically the primary… partner that we have in our institution. As well as to the industry and, and of course, if the students are gaining the right set of skills and the right knowledge and developing those. Then obviously the industry will... benefit. And I use the term industry because that's more applicable to my area, but you know other program areas may call it by different names. It's more like workplace or labour market... Well, there's obviously the financial value as well. Which is a healthy program, a healthy, up-to-date program is definitely going to lead to the success of the organization, of the department at the very least. (Px007 5) and (Px007 6)

In other departments, such as the developmental area in which Seth works, program review was experienced by faculty members as a surveillance tool, which negatively impacted the potential of the review as a reflective activity:

...because my program has been shrunk quite drastically over the past 10 years, I think there just tends to be a lot of... paranoia or might not even be paranoia... might actually be... legit suspicion about does the college want us to stay around? So I think a lot of faculty were uncertain about, is program review a euphemism for... an evaluation of whether or not we're worthy of continuing to exist... I think that that sort of cast, or it had a big impact over the rest of the experience of the program review... a lot of the review felt like we were...
trying to ((pause)) explain to external people, to the department, what it is we do, how we do it, why we do it that way and why we're worthwhile, which is kind of a different thing from what is probably the intention of a program review, which is more inward, like kind of looking... a more meaningful deep self assessment of what are our strengths and what are our areas that we could grow in. It was almost, it's almost risky in the environment that we're working in to look at growth areas,... because it's sort of like saying these are our weaknesses and we are worried about maybe potentially being eliminated completely. (Seth 2)

Seth expanded upon the importance of program review as an accountability tool, but questioned to whom that accountability should be directed:

...the idea behind the process is really crucial and it's a part of the accountability too, and, I think... our main accountability should be to our students but I think our accountability was definitely... trying to make ourselves look acceptable to administration, and it would be really cool if we could really focus a lot less on what does admin want, what does government want, and really focus on... what do students want, and put most of the resources around that. I think that would completely change the process. (Seth 15)

There was some discussion about program review in the larger, sectoral context, which Kira acknowledged as a broader and higher-level accountability exercise: Some of the value of this work is checkboxes for ministry reporting, that we can say “these programs have gone through this review this recently and it was this deep.” And so it's check-boxing for that reason that's a higher, much higher-level ask for money and a much higher-level “look, we're doing a good job” (Kira).

Some participants questioned the purpose of program review as a surveillance tool, while others accepted the function to assess the financial state of their programs and direct impact to employers. Most participants agreed that a purpose of review is to demonstrate accountability, whether that be to students, the institution, or government. While assessing impact and accountability, participants also discussed assessing equity impact and contribution to Indigenization and decolonization, which is discussed in the following section.

4.2.1.7 Equity Impact, Indigenization, and Decolonization. Many participants discussed issues around equity impact and barriers to access and expressed a motivation to include these considerations in their program review. In some cases, they perceived a lack of support for including equity impact analysis and consideration of barriers to access for equity-deserving groups in their program reviews and action planning. Additionally, some participants raised Indigenization and decolonization as a theme that was absent from or unsupported in the program review process.

In reflecting on the value of program review, some participants drew connections between reflecting on making program improvements and improving access and equity in their programs. As Px007 described, “here is how students can benefit better, or how we can... create a more equitable program or to create more accessible program, to create program that has more value for the students” (Focus 20).
Some participants were explicit about addressing issues of equity, diversity, and inclusion (EDI), and included an equity impact assessment which was not part of the templates or resources provided, as Jamie detailed:

Kira and I sat down with the committees’ whole list of recommendations and we... worked on prioritizing them... we decided that we were going to try to assess EDI impact of our various recommendations... so we sat down and kind of looked at every single recommendation that we made... and said... does this help further EDI in some way? If not, is it important anyways? Because there are other reasons to do things... are the impacts... major and widespread? Are they minor, isolated? (Jamie 4)

Kira expanded on this and the unexpected resistance that they experienced:

It was, it was not part of the template... and we didn't have the capacity to do a full-blown equity impact assessment or to do any major equity lens on the project where we would have liked to, but, we did think it was important to, at the very least, when we came up with our priorities, look at them through that lens, and so for the amount of time that that took, it was I think very worthwhile... I want to add though..., that I was surprised that it was a little bit contentious that we did that when we brought it back to the steering committee. There were some concerns about whether it was appropriate, and whether it would open us up to risks in the sense of trying to take on work that we weren't quite ready for... there were two different perspectives. The Teaching and Learning Centre perspective was a little bit more about, institutionally, we don't have a lens or a process for this, so have you done appropriate consultation about this lens that you picked and have you done... have you checked that it's okay with everybody. And from leadership it was more of a risk management concern about is this the appropriate time for us to be taking on these sorts of initiatives? (Kira 5)

Similarly, Seth described noticing that there was nothing in the review templates about equity, diversity, and inclusion, and bringing recommendations forward to the Teaching and Learning Centre, including a focus on disaggregated data:

I came up with like a bunch of questions that I had wished were in the template that were... related to EDI and … we use those... appendices for our program review to encourage... social justice analysis or a gender-based analysis plus (GBA+) kind of analysis through the work that we were doing, and then I gave them to Teaching and Learning Centre and I was like, please consider putting these into the template that you give people.... and they did include stuff around disaggregated data... I think I kind of learned about the importance of institutional readiness around collecting disaggregated data as part of this process because it helped me understand why you can't just do that and without, like, Institutional Research having the skill set around how to collect and manage that kind of data in an ethical way, and, it sort of just pointed out the gaps to me within City College that we would need to fill to put that capacity in place in order to be able to collect disaggregated data. (Seth 16)

Seth later expanded upon how the availability of disaggregated data could help program reviews in identifying barriers to access that students face:

If you think about students, experiences of programs, often we get one numbers...
70% were satisfied, 25% were dissatisfied, and, disaggregated data would be where, ((pause)) in addition to having all of that, those big summary numbers, you'd have the ability to break it down and say, okay, in terms of who is successful, were there certain people... with certain identity factors who are more successful? … are there groups that are... underserved or might be experiencing barriers in the program that we had not really recognized? And then if you have that data, you could say, okay, as part of our program review we want to make sure that we're really supporting whatever group your data shows… or is experiencing barriers to the same level of success. It could be in terms of participating in the program or succeeding in the program, or … feelings of how they enjoyed the program. Those are things that tend to be impacted by the identities that we bring … It's kind of new, so we don't really have infrastructure in place to do a good job… of collecting that kind of data and I think that is… the intersection that City College is at right now, that kind of messy, figuring it out phase. (Focus 1)

Kira also brought forward recommendations to have equity, diversity and inclusion built into the resources and templates and considered in programming and curriculum. They recommended:

...having explicit EDI and Indigenization considerations built-in throughout... a framework that a department should engage with or would be encouraged to engage with, and even develop as they go forward, so that when you're looking at aspects of curriculum, quality, program delivery and so forth explicit considerations around impacts to different equity seeking groups and impacts to Indigenous students, staff, faculty, et cetera. If those were more clearly built in as well as thinking about how to engage with not just impacts to Indigenous community members, but how do we decolonize what we're doing... it is timely to put that thread in even though we are all beginners at it... have you considered how this would affect different groups such as, and then naming some equity deserving groups... according to gender or according to language ability or whatever. That could be just a useful mental check, because I think we're at different places in terms of our habits about stopping and thinking about our answers to questions in diverse ways, and then around decolonization, I think ... having ways to build in explicit reflection on how formal and how colonized some of departmental processes could be, and if there are ways to challenge that... I think [prior learning recognition] might be a good entryway for some programs to look at, for instance. (Kira 13)

Participants considered program review as an opportunity to reflect on, and in some cases, assess issues of equity, diversity, inclusion, Indigenization and decolonization and barriers to access in their programs. Some participants described taking the initiative to include equity impact analyses and considerations around Indigenization and decolonization, and facing resistance when presenting that analysis. The most prevalent theme with participants was the purpose and impact of program review. The opportunities for departments to reflect on their programs, seek out meaningful feedback from multiple groups, advocate for needed resources, and influence institutional priorities were described as highly important and as contributing to the satisfaction of the project.
Participants brought up some frustrations which they perceived as both logistical and political. Some of the successes and struggles in program review were described by participants in terms of the structure of the review project, as is described in the following section.

4.2.2 Project Structure and Process
Many of the discussion points emphasized by participants were centred around the structure and process of the program review project itself. Because the project is so large, and because most department leaders, when going through the process, are doing so for the first time, training and supports were highlighted as vital for the success of the review. Similarly, the resources available to faculty members, both human resources such as the Instructional Associates from the Teaching and Learning Centre, and the physical resources that they provide, were often mentioned. Data was a significant point of discussion, including the quality of data available, challenges and successes in accessing meaningful and consultative data, and the absence of disaggregated data to assess impact and barriers to access. Leadership and oversight of the process and project were discussed by all participants, with emphasis on both the supportive and difficult aspects of how the projects are governed and overseen. Finally, the reputation of program review versus how it was actually experienced was raised by a number of participants. These five themes are considered in the following section and illustrated through stories and discussions by participants.

4.2.2.1 Training, Support, and Coordination. Three sub-themes emerged within the context of training, support, and project coordination. Participants discussed the value of learning about the process by going through it, acknowledging that the task can be overwhelming and amorphous at first. Related to this, the opportunity to learn about project management in an educational context was important to some participants. At City College, the Centre for Teaching and Learning and the role of the Instructional Associates working for the centre are central to the process, as was highlighted by every participant. Some tensions around oversight and leadership of the process are discussed further below in the Leadership and Oversight section.

Participants discussed learning about the program review process as being uniquely valuable to their understanding of their program in general, as Px007 described, reflecting on the overall process: “...from a personal perspective, the value was to ... have a better different insight of the program review process, to understand, and also in the future, to apply some of those best practices in building a curriculum... what are those building blocks?” (Px007 5)

While learning about the process was presented as enjoyable and beneficial, training resources and guidance were still required. The process could be challenging to orient to and, as a result, overwhelming at first. Shane depicted how they felt at the beginning of the process:

So far so good. I mean... we have the Teaching and Learning Centre... we don’t know what the path is... we are so busy in daily operation... I have to teach classes through the day so without extra help by a guide we really don’t know… what you do... What’s the idea, what’s the goal, you know? Because, I’m not familiar with the process at all, so you know, like going in blind... it’s time to start the review, and whoah... and then...
the person from the Teaching and Learning Centre can come here and then... they can guide me through step-by-step... at the beginning, I didn’t know who can help. I was just all by myself... all the documents, and what am I going to do here? (Focus 2)

Some participants reflected on project management, highlighting that component as a learning moment. As Px007 explained, the review provided “my first venture into project management, but more in the educational setting. Not knowing where this is going to end up... but knowing what the goal is, taking the project from ((pause)) beginning to the end was quite insightful” (Px007 4). Px007 recounted how they used tools and techniques from project management to keep the project running smoothly:

I think one thing that really helped was having timeline of fixed deliverables set out right from the beginning... what's due at what date and, and even though the dates weren’t set in stone, but at least I have some milestones that I could track. So that is something that I would say... certainly have to pace the progress of the review. (Focus 10)

Px007 also expressed appreciation for their learning about time management within the project, both in managing the whole project, and in terms of managing time within meetings:

How to manage time that was another big piece... when I'm working independently on the program review as well as... during the focus group, the workshops, the meetings, because, there's only limited time we have to extract all that meaningful information that we want from the attendees. (Px007 4)

They highlighted the latter in more detail:

I think one thing that I realize, that it's really important to identify the scope... and the objective very clearly, because the conversations can digress in all kinds of different directions, and then you realize that your two hours of time has pretty much vaporized like that. So you need to be very, very clear on how you're going to make the most of those meetings with your instructors and your, your partners, external or internal ((pause)). And maybe have a Plan B... if conversation starts going in a different direction, how you bring it back, or how to keep whole agenda on track. (Focus 32)

The Instructional Associates also played a role in project management. As Kira explained, they “helped keep a pace going and [otherwise] that would have been challenging” (Kira 1). They emphasized that “I would say that I found the process to be decently well structured and having the support from Teaching and Learning Centre helps make it go and go on a schedule.” (Kira 9)

In addition to the department leaders and the instructional associates, some participants’ departments were able to second a faculty member to project-manage the process, which participants described as invaluable. Kira illustrated “how meaningful it could be to have the support of a dedicated person with release to help manage the project. That was huge, huge, huge. It was beneficial beyond just having somebody check in and say ‘how is it going’” (Kira 7). Kira elaborated that internal project management and a team approach helped the department:

[A faculty member] project managing us and... and adding lots of comments throughout, and so people were responding to comments and there was sort of an asynchronous conversation going on as we developed the document. But also we had one or two meetings where we would go over the document and talk through it, and, look at the drafts and see what needed to be added, changed and so forth...
I think each section, although written by a single individual, often reflects more voices from the steering committee as well... It was good to be involved in the different sections. (Kira 4) Because the work involved in program review is so far outside of the scope of the regular work of some department leaders, and due to the challenges with secondment discussed above, at times the Instructional Associate takes on a more central role in the departmental work of crafting the self-study, as Shane presented: Shane: She really guided us through... I think she she did majority of the work. Researcher: [Yeah, that's what I was wondering.] Shane: [We just provide the answer.] Researcher: Did she? Who… who wrote the report? ... do you think that she wrote the report? Shane: She, she wrote most of it. Researcher: Got it. Shane: We just kind of put in the stuff that we have to change. Researcher: So she did the writing and then you gave feedback. Shane: Yes, yes, yes. Right. Although we say we did it together, but basically, she's the one who is basically doing it. (Shane 2) Reflecting on training, support, and coordination of program review, participants discussed the intrinsic value of learning about the review process as they went through it, which for some was overwhelming. Participants also described the project management aspects of the program review, and the role of the Teaching and Learning Centre as guides, project managers, and in some cases report writers, through the process. In addition to training and coordination, the resources required to complete the program review were explored as important to the success of the project, as discussed in the next section.
4.2.2.2 Resources and Supports. All participants discussed the resources required for a program review. Even in the best-case scenario, the process is intensive, requiring time, people, and tools. Participants raised the question of whether the resources provided were adequate to complete the project successfully and meaningfully, considering both the human resources and the practical resources such as templates and training. Budget and financial resources provided and required were also discussed, in the context of compensation for external partners and subject-matter experts, and faculty time allocation, which is discussed in more depth in the Time and Workload section. Some participants described a team-based approach, and one participant suggested sector-wide resources, which is also explored below. Participants discussed whether the resources provided were sufficient, and in large part agreed that the resources provided were helpful and necessary. Px007 reported that the availability of resources shifted over the year and impacted feasibility: ... a lot of it also depends on the resources and the resources means internal and external sources of support from Instructional Associates, from our instructors, faculty members... the degree to which I was able to support the program review, it varied through the year, because, there are certain times of the year where it was very difficult. It was challenging because of the nature of the existing program that we have to support and manage and run. And other times where there is a little bit more available time for us to, more flex time rather, to dedicate to program review. It was, it felt a lot more manageable, right? So it kind of is like an ebb and flow situation.
(Focus 7) Similarly, Seth conveyed that the question of whether the resources provided are adequate to complete the project is directly related to the time allocated to it; given the timeframe they were working with, “we did have an impressive amount of resources kind of crammed into that short period of time, and I'm grateful that we did have people from all these different departments and the dean was there. I think that's really fantastic” (Seth 9). Financial resources and the budgets provided were also raised by participants. As discussed in more depth below in the Time and Workload section, the budget allocated to the project, in some cases, was not adequate to release a faculty member, particularly in a small department, which, as Shane revealed, added some stress and pressure to the project: Researcher: So how did you use that [budget]? ...what were you able to support with those resources? Shane: Not much. It's just, for us, right, because all our time is spent in the classroom, already used up all our budget there, right?...we always have the meeting after the class... class finishes at 1:00 and then we will have meeting at 2:00 to 3:00 or something like that. Px007 expanded on this and discussed the financial resources available in terms of ability to effectively engage employers and alumni, in particular: Were [the resources] sufficient to… to complete the project? I would say so, for the most part. Could the project have benefited from additional resources? Yes, most likely. … A little bit more comfortable budget to work with would have definitely have helped. That would have probably, led to more… engagement from graduates and from... our industry partners. (Px007 2) The program review in Kira and Jamie’s departments employed a team-based approach, which, as Kira highlighted, made the project manageable: However, it was useful that we were working together in a larger group with two other departments, and that we had an Instructional Associate helping project manage, because, it just helped keep a pace going and that would have been challenging, I think, if I was trying to manage it alone. So that was very valuable, just keeping things manageable and working along and having other colleagues working on various facets of review kind of in parallel was also useful just in keeping the process going, keeping the energy up... and the ability to work together and bounce ideas off of each other and help prioritize together was also really valuable. The steering committee sat down and broke up by section, basically, assigning work to different people or in a couple of cases, different pairs of people to work on, and that was useful because that way we could just divide and conquer... (Kira 2) Resources such as templates and past program review reports were also discussed as an important starting point to help department leaders orient to the process. Jamie described the benefit of having a template to work from: Jamie: The Teaching and Learning Centre has templates and they provided us with both the template for the self-study and then a full copy of, I think, everything that another department had done for a recent program review, so we could have sort of a model to look at as well. Researcher: Was that helpful? Jamie: Oh yeah, very helpful and gave us... a good idea of what we were expected to do...
((long pause)) Uh, and yeah, the task feels quite amorphous at first, so having the template and the example really helped to clarify what it was that we were doing. (Jamie 2) One participant, Px007, suggested building sector-wide supports, such as best-practice documents that could be shared amongst institutions: Although I'm quite fortunate and…. quite glad in the support that I received from my organization, I think having some kind of more, larger ((pause)) set of… resources across institutions. For example, I'm thinking of like a provincial… body that incorporates and exchanges some of the best practices in program review. Just like we have [a provincial working group in educational technology] for example. Something to that nature, but maybe not that extensive in the beginning, right?... [That] could be, I think, definitely advantageous to almost any institution that is going through the process, whether it's smaller or larger one, and that could also be as a forum... for people… to come and exchange and discuss ideas, once again, in best practices, or even… as a development tool. (Px007 8) Program review was described by most as resource-intensive, particularly in terms of the time and people required. Participants discussed the financial resources available and how they used them, for faculty time or engaging with community. Some participants also reflected on the usefulness of the templates and documents available to them, and suggested sector-wide resource coordination. One of the most important resources in the program review is the data available to departments, and how thorough and meaningful it is, as discussed in the following section.
4.2.2.3 Data. Every participant mentioned data collection and analysis; sources of data, how the data are collected and analyzed, and the quality of data were common points of discussion. In some departments, the data available or collected was meaningful, and allowed the department to validate assumptions that they had about students. For others, meaningful data was neither available nor able to be collected, due to system constraints. Several participants emphasized the importance of qualitative data and meaningful consultation through focus groups, which was challenging given the timeframe. Some departments took the opportunity to access archival records and data or information that had been previously collected for different purposes in order to create more robust data sets. Finally, participants discussed the importance of, and lack of access to, disaggregated data, to fully understand the barriers that some students face and the factors that lead to their success. For some participants, the data available through the program review helped them to understand their student population, particularly around transfer patterns, as Shane relayed: While I was in the program we knew that, you know, some of our students are taking more than one program.... at City College... you know, they would get our program and then they go to [another program at City College]... We have a feel of it. You know we... see some students around and we know that, but we don’t have hard data. Doing this process, right, doing the research that we have hard data, how many students actually in the past five years... after our program, are they taking other City College program or before, are they coming from other City College programs, now we have the hard data...
(Focus 21) In other cases, participants conveyed limitations in accessing meaningful student data because of the setup of the enrolment tracking at City College. As Kira explicated, in a large department with a lot of student mobility, “we have such poor student tracking information that we actually don't know necessarily who is in what program in, in terms of what they intend to graduate from” (Kira 6). Kira expanded on this and the impact on meaningfulness of the program review: Students have to register in a program when they register at City College, but sometimes they change intent and start registering in courses for a different stream and we don't know it... I don't necessarily know how many people are [in a particular program or transfer pathway] for instance. I don't have that info. There's not a count, and... even if everybody was accurately in the major code... our [enrolment system] skills or our access can be so decentralized that it's some work to figure out who needs to progress into what course, in what time... So that's a limitation of our review where, if we'd had better tracking, better information, and better reporting tools, we probably could have taken it a lot further. (Kira 6) Participants discussed the importance of utilizing multiple methods, including qualitative methods for gathering feedback from students, graduates, faculty, and industry and employer representatives. As Px007 outlined, there were logistical barriers to collecting this rich data from external partners, as “they have a lot of useful information, but not everybody speaks out in the... group meetings... So you need to either find ways to do a one-to-one interview with them for example, or through a survey, things like that” (Focus 8). In some cases, collection and analysis of data raised discussions of data literacy and readiness at City College, and led to tensions between the departments and the central Institutional Research (IR) department, as Seth illustrated: [The] most loaded data that we had came from Institutional Research and it had to do with the use of in-progress grades versus satisfactory grades, and, we definitely felt a lot of heavy scrutiny about the degree... of students who get in-progress, grades versus get to move on to the next level after one term... the data that IR had didn't really make sense in terms of how things work in real life. And so there was... a lot of trying to explain to them how things work so that they could actually grab data that that was meaningful. (Seth 6) Seth expanded upon this tension and the importance of using qualitative methods such as focus groups in their department, a literacy program: [There was] a lot of emphasis on quantitative data from IR as opposed to qualitative data and it does take a lot of time to collect that kind of qualitative data like we only did one, maybe two focus groups.... but that probably would have been some of the richest data if we could have spent more time ((pause)) capturing more of those ideas verbally from students. Especially because we're a literacy department... There were barriers, just, for students to be able to do that kind of survey, so ideally, tons of focus groups would have been the best way to collect data from the students about how they feel about our program... (Seth 7) Some participants outlined the value in accessing information that they had been collecting or storing for some time.
In some instances, participants had been keeping track of suggestions, ideas and complaints, which they were able to include in the review, as Jamie revealed: Part of it was also amalgamating a lot of comments that have been made over the over the years from faculty, as well as, sort of, thoughts that I had about things that could be improved, that have just sort of been on my, you know, back of my mind, to-do lists ... So it was an opportunity to take stock and sort of put in one place. All of the things that both that I thought should be changed, and that other people have been complaining about... (Jamie 1) On the other hand, some participants found a lack of historical data available. Shane expressed surprise at the lack of a historical record: I was surprised that the school didn’t have a record of all this stuff, you know? And, I want to say... ((big audible breath out)) it’s important knowing the history... during the review the some of the suggestions that we've heard... we know the whole history and say, hey, we’ve done that before but it doesn't work... So we don't have to. You know, I would tell them, ok, this suggestion... we did it before and then that’s what happens... we can actually tell the result already, because it's been done before. (Focus 26) A final theme that arose within data collection and analysis was the access, or lack of access, to disaggregated data. As Seth described, “it's one thing to understand students’ experiences and student progression, but it's another to be able to kind of see how that information changes based on students’ identity factors” (Seth 7). They suggested recommendations that could lead to capacity building in the college’s review practices: I ended up kind of writing a bunch of questions out about what kind of data would be nice, even though we wouldn't have it just for the Teaching and Learning Centre to take away to think about, like, can City College build capacity around collecting disaggregated data? (Seth 6) Ideally, program reviews are built on evidence, and the data and information that make up that evidence are of central importance for the faculty members engaged in these processes. Participants expressed both appreciation and frustration with the data available to them, how it was collected, how meaning was drawn from it, and how representative of the program and students it was. In some cases, these conversations around data led to tensions between faculty members and centralized college departments, which are discussed more broadly in the following section.
4.2.2.4 Leadership and Oversight. At City College, there is no centralized quality assurance or similar office, and as such, the Teaching and Learning Centre provides coordination and leadership for departments going through review. City College has a standing educational quality committee as part of the governance structure, and that committee is responsible for high-level oversight of program quality assurance including program review. Leadership and oversight of the process were raised by many participants. Most expressed appreciation for the support of the Centre for Teaching and Learning, and some also expressed concern or discomfort around their role in oversight, particularly in how decisions around deadlines are made, managed, and communicated. Participant discussions also highlighted inconsistencies in oversight, particularly with respect to priorities, deadlines, and outcomes.
Most participants expressed appreciation for the guidance and leadership of the Centre for Teaching and Learning. Having instructional associates involved as project managers “was very valuable, just keeping things manageable” (Kira 1). Particularly for department leaders going through such a process for the first time, having support from the instructional associates alongside experienced faculty members made these projects possible, as Shane emphasized: I just took over the department this year. So, it’s lucky the previous program leader is now the auxiliary, so... it is easier doing this process because at least he can help me, he can also guide me too. But he hasn't done review himself.... either, so, both of us are going in blind, but at least the daily operations, he knows it well. Although I’m new as the program leader, I have another guide. I have two guides guiding me right now. So... I'm lucky that way. But as I said... both of us haven't done it. You know, so without that extra person from Teaching and Learning Centre, you know, we don’t know what to do. (Focus 3) As Shane illustrated above, and as mentioned previously, the timelines of the program review are built around curriculum development timelines, and one of the implicit and explicit purposes of program review is to support applications for curriculum development (CD) funds. At the same time, some departments experienced a gap in oversight when it came to the actual proposals. Jamie and Kira, working on the same program review, recounted this: Jamie: We didn't make it clear who was going do the CD proposal in... our meetings, and, then, the day before it was due, maybe the day it was due, I was meeting with the dean, at, say like, 3:00PM, and as I was walking out of her office, she was like... who was doing the CD proposal?... So I did it. Kira threw some notes in and I finished it at 1:00 o'clock in the morning. So that was not optimal. Researcher: So that had, like, that was just... Jamie: Just fell through the cracks. Researcher: Just fell through the cracks, right. Jamie: Yeah, everyone thought someone else was doing it and... Kira: Yeah, my recollection was that we didn't talk enough about it to even know the full scope of what was going in that proposal and why, and, so it was really a ball dropped... and the whole point of the deadline. (Focus 17) As Kira elaborated, “the review was good up to a point... and then there's an information cliff and we were just scrambling at the last minute” (Focus 18). Px007 reflected on this as well: I think in hindsight what would have certainly helped with the CD funds proposal would be if the instructional associate was involved in that process of writing the proposal... they were involved … in the program review process, they had... insights on what's going on with actions that are coming from it, and then, take it from there to pitch your proposal.... I think that with their expertise it would have been a lot smoother and better processing sort of doing it all-nighter on the CD fund proposal… to have them co-create that proposal would probably be a better process. (Focus 19) As Kira summed up, “it sounds like inconsistent oversight, like very strict timelines that turned out to not be helpful or meaningful. And then at the end, no real follow through” (Kira). Leadership and oversight are important in any large project, and participants described this with both appreciation and frustration.
Participants addressed the central role and value of the instructional associates working in the Teaching and Learning Centre, as guides and project managers, acknowledging that the projects would not be possible without their leadership. Frustrations expressed around leadership generally concerned seemingly arbitrary and non-responsive deadlines, and a lack of follow-through in creating proposals for curriculum development immediately following the reviews. Participants also discussed their experience of program review compared to their expectations, which is discussed in the following section.
4.2.2.5 Reputation of Program Review. Participants discussed the reputation of the review process, and what they had heard about or expected from the process as they set out. In most instances, there was acknowledgement that although the process included challenges, it was easier to manage and more meaningful than participants had expected it would be before they began. In some instances, there was a shift in the perception of experience as the process went along. For some participants, the reputation that preceded program review was quite negative. As Jamie recalled, “what I had sort of heard was, like, look out, program review was coming for you at some point, you know not the day or the hour... and it really was not nearly that bad” (Jamie). Kira described this as well, acknowledging that while the review was time-consuming, it was manageable, partly due to the structures in place to support the process: So even though for me I found it was a real struggle to manage review around everything else I was surprised that it was not as overwhelming as the rumour mill had it. Before entering review, I would hear things like, it's the worst, it takes over everything and you're not going to figure out even how you're going to manage that year. I didn't find it that bad, so that was nice. I did find it to be... more than I had capacity to do... in a deep way in some… portions, meaning that... when I hit my limit, I had to just trust that the rest of the team was carrying that through. So that's... it's good, I felt that it was good that I had that team support. (Kira 7) ... I did find it to be time consuming and at some point stressful, but I found those stressful points to be relatively limited in that frequency and duration... overall, the workload, there was enough check in points that we could reflect when things were going too quickly or when more time was needed... I would say... it's not as overwhelming as what I had heard and perhaps that is because it was supported by such a large group that there was several people in the steering committee... I know that some City College departments are so small that functionally one or two people are handling the review. (Kira 9) Shane, working in a very small department as a new department leader, highlighted that at the outset of the process, it seemed very overwhelming, and support from the Centre for Teaching and Learning was instrumental in going through the process. As Shane recounted: We didn't know what to expect. You know, we only know there would be lots of work… It's already better than we expected because it’s supposed to be... our department... writing all those reports and stuff... [the instructional associate] notices us, that all we have very small department that we don’t have the resources. So she took the extra work you know to help us to finish that, ((pause)) So it's already better than we expected... Because at the very beginning...
we are the ones who are responsible, our department writing all the report and the paperwork... So that part is you know, like offloaded back to [the instructional associate]… really help us out a lot a lot. (Shane 8) For some participants, what helped the program review be more manageable, and more meaningful, was advocating for adjustments to the process to fit the unique needs of their departments, as Jamie depicted: I've talked to a few people who have sort of said, you know. “Oh my gosh, I have to do program review next year...” I've said to them is it's not as bad as it's sounds... You can choose to take the opportunity to really look at how your program can be better.... and, where it is and isn't working at a higher level... and, I have also said, if something really doesn't make sense for your program, just push back on it. Because you can, and the whole point of program review is to make your program better. So it's not to shove your square peg into a round hole... if it doesn't work ((pause)) don't panic.... it's a good process... it's not punishment. (Jamie 9) As illustrated above, participants described a discrepancy between the reputation and the reality of program review. Most participants went into the review process having heard rumours and stories about the degree to which it was difficult, even painful, and most found the process better than expected. Participants described suggestions that they would give to colleagues who were starting out in the process, including relying on the supports available, and questioning the process and templates if they were not a good fit for the program. This section has included topics that participants raised in relation to the structure and processes involved in the project of program review. Since program review is a large and complex project, having adequate support in place, including training, coordination, facilitation, and project management, as well as practical resources such as appropriate program and student data, templates, and reference documents, can be very impactful to the experience of faculty members, particularly those going through the process for the first time. Also central to the faculty experience as described by participants is time and workload, as discussed in the following section.
4.2.3 Time and Workload
Several of the stories that participants relayed involved time and workload, particularly the timeframe of the project and the release time allocated to the program. Many participants found the timeframe awkward and rushed, to the detriment of the quality of data and information collected and the depth of reflection that was feasible during program review. Others characterized the timeframe as extended and dragged out, which impacted the currency of the review and, by extension, the program itself. Some participants’ departments had access to funding that was allocated to release time for one or more faculty members, which participants described as a benefit to the project. However, the release time available was not always sufficient or logistically feasible, given department structure and size.
4.2.3.1 Timeframe. One of the key tensions explored by participants involved the timeframe allocated to the project. Every participant discussed the timeframe as impactful to their experience of completing a review.
Some participants described the time allocated as making for a rushed and stressful project, and others described the timeframe as too stretched out to be meaningful and to support currency of programming. In some cases, the timeframe was conveyed as taking precedence over the quality of the project. Flexibility and responsiveness of the timeframe were discussed extensively and are presented here and in the Power and Relational Dynamics section. Participants had mixed opinions about the timeframe allotted for program review. As Px007 explained, in their continuing education context, the review took a long time, and while this contributed to making the project manageable, a faster-paced review would have been better suited to the responsive nature of the program: At times I did feel that it, and I still do, it was… an extended process, as in, that it was quite lengthy. Which had pros and cons. The immediate advantage I can find is that it becomes a little more manageable to assimilate it into my, you know, day-to-day activities. On the other hand, … from a project delivery perspective and due to the nature of the programs and the whole life cycle of this program review, the needs of the industry. I felt that having it compressed, or more of a fast-paced process would have… not necessarily changed the quality, but certainly, it would have maintained the currency of the of the program a lot more. (Px007 1) On the other hand, the more common sentiment was that the timeframe of the program review was rushed, running up against the summer break, which impacted the quality of data that the program area was able to collect and analyze, as Seth relayed: Because we were running up against the end of June... we could only squeeze in two focus groups before it was summertime and then it would be too late. So that's another reason why it would have been good to have much more time... (Seth 7) Additionally, some participants found that given time constraints and deadlines, the program review became more procedural than reflective, as Jamie specified: I think with more time there were other things that we would have done. I would have maybe wanted to dig into some issues in more detail. It felt like a lot of things we've sort of racing to get all the boxes ticked, and so some of the things I would like to dive into a bit more. But yes, we did accomplish… if the task that we were supposed to do if the task we're supposed to do is produce a self study and then have an external review and then produce the final report, we did those things, and an action plan, ((tongue click)) managed all of that, right. (Focus 4) While in some cases, the rushed and bounded nature of the project timeframe was inconvenient and impacted the depth of reflection, in other instances it was stressful and painful. For instance, Seth recounted a situation where, faced with a family crisis, they asked for an extension to the timeline: ... and I asked, like I said, can we push back the deadline for the self-study to the new year because this is just too much and... and the Teaching and Learning Centre was like “No. We need to stick on these timelines because we need the external reviewers to come in by this time, and we need to get everything done by the time that the call out for curriculum proposals are due, and in order for all of those things to happen, you need to get your self-study done by the by like end of October at the latest”.
(Seth 3) They went on to describe how they were personally impacted: I said to the dean after that, like, I feel so disrespected, and I said, I feel like City College just chewed me up and spit me out, and she was so apologetic. She was like, “there's no reason”... So after the fact there was recognition that that wasn’t ok, but like yeah. I still... and, just for a deadline of… the curriculum proposals… I would rather have a whole other year… and you know, and we'll do, submit a curriculum proposal the next year (Focus 16) At City College, at the time that these participants went through program review, the process was contained within the fiscal year, which had some inherent challenges. Kira discussed the problem of fitting within the fiscal year: Sometimes the timing is inconvenient, like when you're going to do your external visit, or when something has to be done by... I think we're trying to fit it into a fiscal and that's not always the best fit. I think, sometimes I think when we would rather dig into something a little bit deeper and maybe it takes us an extra month to do the report, that might have been preferable, but we felt like we had to finish by a certain time. (Focus 15) Moreover, the tension to fit within institutional timeframes extends into the implementation phase following program review as well, as Shane detailed, discussing the curriculum development that followed their review: Shane: And that's the heavy lifting too... nobody can help, help me anymore, now it's all me now... writing the curriculum. Researcher: That's really interesting. So that evaluation part for you was really well resourced, but now you have to do all the follow up all on your own. Shane: Of course, I still have … I mean, the Teaching and Learning Centre to guide us. But yeah, that is only, you know, really guiding now. Okay then, the real meat there, you know, it's like, it's us now, you know… and then, you know we looked at the timeline yesterday, if we target September [launch of revised program], that means we have to finish all this by December, submit it in January and then I told her, you know, wow that…. Researcher: Yeah, I know. Shane: They… that would be like a dream... the ideal schedule, you know... but we will try to hit that target, but I mean... We all know that is the kind of the... the ideal situation that, you know, target that we can meet. (Shane 10) As some participants raised, there may not be a neat one-size-fits-all timeframe for all program reviews. Considerations such as the size of program, number of faculty members, program schedule, and personnel resources can all impact the timeline. Jamie and Shane discussed these differing needs here in the context of fitting within the fiscal year: Jamie:... just give it more time, like just give it six more months. Or have a pre-phase in which you have time to gather more information and engage people. Maybe make it two fiscal years if you need it to be attached to the fiscal, but there's no obvious reason that it needs to start and end on… the timeline that it does. Especially when programs are supposed to be renewed every five years.... I would rather take longer and do a better job of it, but I don't think it necessarily needs two full years but if the fiscal thing is a real block, then just do two. Shane: I think that timeline should be related to the department size. For us, very small, it doesn't matter... It's fast. But for bigger department, they do, right? It shouldn't be, like, one formula for everybody, right?...
for us one year, it's OK. It's only little, right? Only two, three people. Yeah, but if you get like, close to 100 instructor, I mean, you know, you have to get all the information and stuff you know, should be all the time. (Focus 33) In exploring the timeframe allocated for program review, some participants found it too long and others too short, and awkwardly structured within the fiscal year. In some cases, the lack of flexibility of the timeframe was detrimental to the quality of the review, and even to the wellness of the department leader. Participants raised the possibility that there is no one-size-fits-all approach to program review, given programs of varying sizes and other programmatic differences. Who works on the program review, and whether they have dedicated release time, impacts and is impacted by the timeframe; this is discussed in the following section.
4.2.3.2 Release Time. Participants discussed secondment of faculty members to work on the program review as beneficial but complex. Some participants’ departments were not structured in a way that made release time possible, and others found that the budget allocated was not sufficient for releasing a faculty member for any significant length of time. Other departments found that, although there was available budget, there was no one to release, based on program size and structure. Participants discussed the benefit of having a faculty member released fully or partially to support review. As Jamie explained, having the right person seconded contributed to the success of the program review: I think a lot of our success... was because... we had a faculty member who was released, and she was so amazing... she phoned every single person in the department, in all three departments, I think, to chat with them about their thoughts on the review, and she was really good about keeping tabs on everyone, and, you know, “just a heads up that I need this section of the report in 10 days” and then “okay, I need it in three days”, and “how is it?”…. If we had had someone else who was not [that particular person], it would have been much worse. So I think we were lucky to have that particular person. (Focus 6) However, even with release time within a particular department, department leaders themselves may not have any release time dedicated specifically to the program review, and as a result their own work may be stretched within the process: Kira: ... because there was not really extra time set aside for department leaders for this... the managing of the process came at a cost of other opportunities. For instance, when we're spending so much attention on our [university transfer] reviews, the [other] programs within our area go, not unnoticed, because we're still doing that work, but they're unsupported in a way, because all this bandwidth is used up. Researcher: Yeah. It's manageable, but at what expense? Kira: ... I would tag on to that to say I think that we all worked pretty hard not to neglect, but that meant doing extra work off the sides of our desks to make sure that our normal responsibilities were okay. (Focus 5) In parallel to the timeframe discussion above, the complexity of release time extends into the implementation of the action plan, which usually involves curriculum development. As Kira stated: ... another factor comes back to...
the way that we look at release for department leaders, and, when we were looking at some of the action plan and what to ask for, we went, well, how can we even use this money in a meaningful way unless it's a certain size where we can buy out a faculty member’s time for a period of... a course or two. (Focus 18) In some departments, while release time is available, the faculty members seconded still carry a course load. As Seth related, this was “really tricky for her because she's also teaching and she had like, three different teaching contracts, and then this program review thing, and it was still hard, even though she had that release time” (Seth 4). Additionally, some departments at City College are very small, with only one or two faculty members, so even with budget for secondment, there is no one to second, as Shane narrated: That's all we have, right? Only two people, so... we don’t have another person to release... even now with the extra instructor it's just barely helping us to run smoothly inside the classroom... only a few hours a week... So as you say, we really don’t have a person to release, so, so it’s quite tough... because we’re not like other bigger departments… where they can set aside a couple persons to do this review right?... we have to teach our class every day and then just try to, to get it done. (Focus 3) All of the participants brought up time and workload in their reflections on their experience of program review, and these stories are woven throughout all of the themes discussed. In this section, stories that focus particularly on the timeframe of the project and the benefits and complexities of faculty release time are highlighted. Some found the timeframe too short, others found it too long, and ultimately participants suggested that perhaps there is no perfect timeframe to meet the needs of all programs in the review process. In terms of faculty release time, those departments for which it was feasible expressed appreciation at being able to second a person to work on the review, but this was not without complications, and did not solve all of the workload issues, for either the seconded faculty member or the department leaders. These discussions raised questions of power and relational dynamics for departments, department leaders, and program coordinators, particularly in terms of agency for participants in these processes, and for faculty commitment and consensus.
4.2.4 Power and Relational Dynamics
All themes discussed above and many of the stories shared illustrated the nuanced power and relational dynamics across levels of institution, program, and personnel. Some of the dynamics involved consensus and commitment of faculty members or departments. Participants discussed their dedication to and challenges in fully engaging department members, and the balancing act between consensus-building and creating action plans with adequate specificity to be realizable. A final discussion in this section, wrapping up the themes that emerged, relates to the agency of department leaders and faculty members, particularly as it pertains to flexibility around timelines, evaluation criteria, and methods, and to recommendations or action items arising from program review that fall outside the locus of control of individual departments, department leaders, or program coordinators. Many of the stories presented throughout this section involved agency, and the discussion below summarizes some of the main points discussed through the chapter.
4.2.4.1 Faculty Consensus and Commitment. Meaningful departmental and faculty engagement in the process emerged as a major theme throughout the interviews and focus groups, as discussed above in the Purpose and Impact section. Every participant expressed a desire to hear and incorporate feedback from faculty members and instructors, and most spoke of challenges in doing so. Participants described the logistical challenges around engaging faculty members, including competing priorities and time constraints. During the action planning phase, there was a particular balancing act played out in some departments, where the recommendations and action items presented were purposefully vague to cultivate consensus, which resulted in expected or realized challenges in implementing the suggested changes. Jamie presented tensions with engagement from their department, drawing connections between the time pressures faced by the team working on the program review and by faculty members amid their busy terms: We really struggled with getting faculty involved, and it is one of those things in a very big department, you can always feel like someone else will do it, so you don't need to look at it, or get involved, and so I don't know, I feel like we could have used some help, like maybe we needed pizza. ((laughter)) Just… we didn't. We really, I think failed at that part of it. But I don't, I don't know… what we could have done differently and maybe that's the thing where if we had more time, we could have engaged faculty but it was sort of like, “okay, we finished the report and we have a week, and... do you have anything you want to say? No, you're busy marking midterms? Okay. I guess you like it.” So let's move on. (Focus 29) Seth described similar challenges engaging with faculty, with slightly different logistical and timing considerations, given faculty members’ competing pressures: And then in terms of faculty, … childcare was actually a big theme because all of our teachers teach at… the same hours, which meant that we had to do program review meetings outside of those classroom hours, and most of our faculty who were involved have children and could normally be there to like, pick up their children from school. But on [meeting] days they couldn't do that, and I felt awful, like, putting the parents, and, I’m not a parent, so I just felt terrible… asking the faculty. It wasn't just… the additional work, but also the way that impacts their families or caregiving responsibilities that they have with their spouses, etcetera, and then I think there's levels of... interest in participating in the process among faculty that sort of varied. I think a couple of faculty members just thought… this is the thing that institution expects us to do. We don't actually care about it, so we're not actually going to engage with a lot or try very hard. So, that kind of became a barrier as well. (Focus 9) Another tension explored by participants was the balance in creating recommendations that were specific enough to be manageable and meaningful, and broad enough to be accepted by the department members. As Seth outlined, “some of our action plan items were so general that, like, we were able to get consensus about something that was a super vague thing. So then when it comes to actually implementing it, it's like... there's not really a lot to go on” (Focus 12).
Kira and Jamie also discussed this tension and the delicate balance between finding agreement amongst faculty who were somewhat indifferent to the process and creating meaningful recommendations with enough detail to implement: Kira: Sometimes we got consensus on the steering committee that should probably not be construed to mean consensus among all faculty… they were consulted, they were given lots of time to look at our report and so forth, and then the little bits of feedback we got back I think were like “the self study makes sense”. But other things in our action plan like that, we want to have an online learning strategy that is going to not be a straightforward implementation because it will take a lot of work to get consensus. Researcher: … and you know at this moment, looking back at that, that you didn't have consensus amongst the whole faculty on those action items. Jamie: I mean, or most, sort of, did not care. I mean, we put it out there many times and no one said “I hate this action item don't do it.” Just, it was sort of the resounding “meh” from most people, so... Kira: But just, or, I think our experience with challenges about getting consensus on even small items, if we're looking for something like a unified learning strategy and for online and other delivery modes and some faculty are perhaps not using Moodle, we know there will be a challenge… kind of our reading into the future ((pause)) as distinct from somebody saying “no, I don't like that or I'm worried about what that means.” Jamie: I think we'll encounter the resistance more when we actually go to implement something. We just made a Moodle template... that we want everyone to use and for sure when we tell people that “here it is, we've made the template just like we said we were going to. Please use it.” That's when we will encounter the resistance and not at the stage where we said, “you know, we think it be a good idea if we had universal Moodle template.” (Focus 14) Jamie summed up these tensions and their desire to have meaningful consensus and commitment from faculty members: I just wish that I knew if faculty really supported all of the various recommendations, because that would make me feel a lot better about everything that we have done. I feel like it's good quality and I can support the recommendations. I can tell you why they're there, but, at the end of the day, I don't really know what faculty think of them, if they care... and that would make it a lot more valuable and meaningful if I knew that everyone was on board. (Jamie 7) For most participants, including faculty members in the process was highlighted as important, and participants shared stories of their challenges in engaging meaningfully and navigating consensus and commitment. These challenges raised questions around agency, both for participants themselves and for the faculty members they were representing.
4.2.4.2 Agency. Some of the most emotionally charged accounts that participants shared were around agency, for themselves as department leaders, and for the faculty members involved. This was primarily raised in stories involving setting timelines that worked for the department and individuals, flexibility around criteria, metrics, and methods, and the ability to implement recommendations. A common sentiment was that the recommendations that were made were outside the locus of control of individual departments or faculties.
Many of the stories shared in relation to the themes discussed above also involved agency, which is woven throughout the narrative as well as highlighted here. Adaptability and responsiveness around timelines arose as a major theme for participants, with varying degrees of gravity. As Px007 expressed, having set timelines was helpful for the structure of the project, but there was limited ability to influence those timelines: Px007: I think one thing that really helped was having timeline of fixed deliverables set out right from the beginning. I used Excel for that for example, but you can use any project management tool to achieve that. It’s basically a... what's due at what date and, and even though the dates weren’t set in stone, but at least I have some milestones that I could track. So that is something that I would say... certainly helped to pace the progress of the review. Researcher: ... did you find that you were able to set those deadlines, or did you find that they were imposed on you? Px007: I would say a bit of both. Shane: I agree. ((laughter in the room)) Px007: … I had some flexibility to play around and... and set those, but then they were also department considerations… If you were to get this done, it must be done at a certain time during the year and also there were some operational factors also, because for example, if we were to have a big consultative meeting with instructors or externals, it wouldn't be feasible to do it right in the term is starting, right? That's when everybody is quite busy… either get it done before or delayed it until later. There were those operational considerations as well. (Focus 10) For some participants this was more than inconvenient; it was troublesome. As described above, for Seth, not having flexibility extended to them while navigating a family crisis and the program review resulted in a very difficult situation. As Seth emphasized, “I was like, this is too much… So, yeah, the timelines… there needs to be more responsiveness” (Focus 16). Participants also discussed tensions around flexibility and rigidity of the templates used, and suitability to their own programs. As Jamie related, reflecting back on the review, “I wish that I had known that it was not as rigid as it looked it, um... because they do have these templates and there isn't necessarily the sense that you can sort of tweak it to suit your programs” (Jamie 8). Jamie elaborated, describing the impact on the department: I think that just caused a bit of unnecessary stress, looking at it, and thinking this was a set in stone template and that we had to make this monstrous curriculum alignment matrix... And not realizing that we kind of had the power to say, well, this is our program review, we're going to do it a little bit differently. (Jamie 8) Although there was problematic rigidity in the timeline once the review began, for Seth’s department, being able to exercise some flexibility was an important aspect of their review, as they illustrated, “we had a good amount of freedom around defining sort of the parameters of the program review in terms of what questions we wanted to explore… even the timing of the program review...umm, which was helpful going into it” (Seth 1). In addition to flexibility of the templates and methods, some participants described flexibility in the crafting of their action plans, and how rigid those documents were considered.
Seth conveyed this, invoking the concept of living tree documents: Something that helped make it more manageable for me was... having permission from the dean to think of the documents as sort of living documents as opposed to pieces that kind of had to be finished, and then they would stay like that forever…So, it just kind of gave us more space to breathe to be able to be like, hey, this is what we can kind of work with for now, and we might change it separate from this process, and that kind of took some of the pressure out of it… I've heard people refer to that as like a living tree document idea. I really like that a lot the idea that documents can evolve over time and don't have to be fixed. (Focus 11) Similarly, Px007 narrated their department’s approach as iterative, and the benefits of including flexibility in the process: Yeah, and another thing I would definitely say, it’s something that still I'm working to put in my, my practice in a lot of things I'm doing, not just going through review, is to consider it an iterative process. So, what you may think the direction is going to, may not be the direction you end up with… you may have to make changes or revisions and when you think of it as an iterative process then it's easier to build in flexibility and also loop back. (Px007 7) Flexibility of the template was also raised in relation to equity impact assessment. As Kira explained, echoing Jamie’s sentiment above, their department discovered that the process was more flexible than it first appeared: I think if I were to give advice... I would say… if you get a template, don't take it as the hard and fast this is what you must do. Be critical about what is in it and what is not in it, because we found we really struggled with the way the template did not include any anything really about equity, diversity and inclusion, and not really much meaningful about Indigenization, and it was hard to find the formal time and support to have these big conversations that are very important when you're feeling time pressure and it's not in the template… So I think that Seth did a great job with their program review with saying, you know, we're going to use the EDI impact assessment and work some other things through it, and we came to that very late in the game and did not have the same success. We did try and introduce some elements, but I wish that I had felt more permission to push back earlier. (Focus 31) Another theme around agency of the faculty members involved in program review was a sense that the needed program improvements (particularly non-curricular improvements) were outside of the locus of control of the department, which led to an undermining of the review and the thoughtful effort put into making meaningful recommendations. Px007 described this tension, stating that although they could not implement the recommendations within their department, “in some ways we kind of... it was nice... to have that in the action plan so that we can just say, okay, well, at least in theory it's there” (Focus 13). Other participants described this lack of control over implementing recommendations as alienating.
As Kira conveyed, it was uncomfortable to spend a lot of time gathering information and crafting careful recommendations, only to have them functionally ignored: It sometimes was difficult to see the point of some of the work we're doing, or I thought there was a point and then it seemed like it was going nowhere or was not necessarily going to be useful… we gathered a lot of information to make a recommendation that was somewhat out of the control of our department... So if it takes you a lot of work to put that recommendation together, then it feels like there was not meaning there… that were more than just “faculty can put more time into it” that felt like [that was not meaningful or valuable] and that's where it was a little bit alienating, especially toward the end. (Focus 22) While not the most common themes that emerged through participants’ stories, those involving power and relational dynamics in program review were amongst the most emphatic. Participants described frustration about lack of agency, and appreciation where they were able to exert influence. Some participants had difficult times throughout the program review and others struggled with not being able to engage faculty as thoroughly as they wished. Overall, participants emphasized their own commitment to the program review process and to navigating the dynamics therein, relating directly to their agency in the process.
4.3 Conclusion
This chapter has delved into the stories of the participants, exploring the four major themes that emerged through the participant interviews and discussions: purpose and impact, project structure and process, time and workload, and power and relational dynamics. While each of the themes is presented and explored separately, they are interconnected. As Seth revealed: It was like juggling multiple people. Like, I've got the faculty over here who I really want to feel engaged in the process, even though they're feeling suspicious about what the whole point of it is and then I have sort of the admin who I really want to be like, look how great our program is! And so that's like another... another job. And then I have IR who I feel like doesn't get us at all and I'm trying to like work with them to help them understand us so that they can get data that will actually be useful to us. And then yeah, I guess those are kind of like the three... And then we have the external review team who I also am like I really want to give a good impression of our department and City College, and I want them to have a positive experience because they're, like, somebody came in all the way from [another city in the region] for this, and like you know, as a host I want to do a good job… I want our department, to them, to come away being like, wow, they're doing a great job and write a good report. So it feels like there's a lot of kind of, um, uh, different groups that I feel like I'm having to manage in in different ways for different purposes? And that takes away from like the actual work of introspection of our department and actually, like, what is it really that we could be doing differently or better? Or, um, you know the things that are actually probably the main point of a program review. (Seth 13) The following chapter draws connections between the themes that arose from the participants at City College and the themes identified through the literature and theory.
These connections are followed by recommendations for institutions to consider in planning, designing, and implementing program reviews with attention to increasing faculty ownership of the process.

Chapter 5 Discussion: Interpretations and Recommendations

5.1 Introduction

The purpose of this study was to explore faculty members’ experiences of quality assurance processes, particularly formalized program review, a policy-driven review process at City College. Illuminating the experiences, successes, and struggles of department leaders and program coordinators undergoing this process may provide insights for faculty members, quality assurance and teaching and learning practitioners, and administrators involved in strategic planning initiatives at post-secondary institutions. This could be particularly relevant at community colleges and vocational institutions, where the day-to-day instructional and administrative activities of department leaders going through these processes can be quite different from the evidence-gathering, analysis, and report-writing activities required of program review, and where time and budgets may not allow ample reflection time, significant qualitative data collection, or meaningful consensus-building. A collective instrumental case study was the chosen methodology, with data collection involving interviews and a focus group with department leaders and program coordinators who had led their departments through program review within the past five years. Focusing on how meaningful and how manageable participants found the process, the research explores the question “what is the experience of department leaders and program coordinators leading the program review process?” Data were analyzed using a narrative approach to highlight the participants’ stories and the interconnected social, cultural, and political/policy factors, as well as the subjective and emotional elements that these factors influence. Coding was emergent, using key moments and sound bites to emphasize participants’ voices in the stories they told, employing a mix of direct interpretation and categorical aggregation. Four themes and 16 sub-themes were derived from key moments and sound bites pulled from interview and focus group transcripts. The four overarching themes were purpose and impact, project structure and processes, time and workload, and power and relational dynamics. This chapter considers the interpretations and meanings underlying the data and themes identified, framed along key issues that arose through the analysis. Twenty-one specific recommendations have been derived and are presented along with four overarching suggestions for implementation. The chapter concludes with a discussion of the study’s limitations and suggestions for further studies.

5.2 Cross-Case Interpretation of the Findings and Issues

In framing my interpretation of the findings, I revisited my long-held epistemological stance described earlier, that systems derived from first principles always contain undecidable factors, or, in other words, any formal system can be either consistent or complete, but it cannot be both. Similarly, I considered again the influence of critical reflection, and my overarching focus on agency, power dynamics, and uncovering underlying assumptions that may be holding us back.
I also returned to Stake (1995), who introduced issues as conceptual structures to frame research questions, emphasizing both contextualization and complexity, and stated that “issues are not simple and clean, but intricately wired to political, social, historical, and especially personal contexts” (p. 17). Given these two influences, I examined the findings along the four overarching issues within the complex system of five cases of department leaders going through program review:

• Is program review meaningful?
• Is program review manageable?
• Can a program review process be both meaningful and manageable?
• Are the meaningfulness and manageability of program review in conflict with one another?

To do so, I dove back into the themes explored in the previous chapter, cross-analysing the five cases along one or more of the issues stated above, and derived recommendations and implementation suggestions directed towards those issues in context.

5.2.1 Purpose and Impact

Many of the participant discussions involved the purpose and impact of program review. Participants described the reflective nature of program review, the relation of this process to institutional strategic planning, and the importance of meaningful engagement with faculty, students, graduates, and community members. Participants discussed the potential, through program review, to initiate and influence program changes and improvements, advocate for resources to make those changes, and assess program impact. Considering program review as an accountability mechanism and a tool to assess feasibility led to discussions around equity impact analysis in the review, including considerations of and contributions to Indigenization and decolonization.

5.2.1.1 Is Program Review Meaningful?

As explored through their stories, participants were able to use the program review as an opportunity to reflect critically and analytically about their programs. Taking a step back to reflect on their programs allowed participants to develop a more holistic view of their programs, to validate held assumptions, consider operational aspects, and to identify strengths and gaps, including issues of equity and access. In some instances, participants were able to reflect on or create historical records of their programs. Although constraints were described, participants were able to exercise agency to question or validate assumptions through review and to construct new meanings. The literature agrees that program review should be a reflective activity (Hoare et al., 2022; Senter et al., 2020; Wagenaar, 2015). As Wagenaar (2015) argued, “program review should be constructed as a process of continual reflection and improvement, part of the institutional culture” (p. 13). Through critical and analytic reflection, program review can be a meaningful exercise. Participants recognized the potential of program review to influence institutional strategic plans but did not necessarily experience the process as such. Participants enjoyed creating recommendations and action plans but did not universally experience agency in the planning process. The literature likewise agrees that program review has the potential to be meaningfully connected with strategic planning (Barak, 2007; Coombs, 2022; Hoare et al., 2022; Vettori, 2018), yet, as Hoare et al. (2022) pointed out, “there persists discontent with its capacity to impact institutional planning” (p. 402).
Program review can be meaningful, but disconnection between this process and institutional planning and initiatives can undermine the meaning for faculty members. Participants additionally found program review to be a tool with which to bring forward issues that had been known for some time, and to advocate for resources to address those issues. Challenges in advocating for resources included under-supported transitions from the review process to the implementation of recommended changes, and recommendations that did not fit neatly within curriculum development funding frameworks. Participants described being able to exercise agency to create program improvements while observing a disconnect between the stated purpose and the actual influence of program review, resulting in a sense of alienation. Reviewing the literature, we find that while there is widespread acknowledgement that program review processes should be linked with resource allocation (Davison et al., 2009; Kleniewski, 2003), endorsement from administrators is critical if this is to be the case (Senter et al., 2020). On the other hand, focusing on advocacy for resources can detract from the reflective and improvement-focused nature of review. As Conrad and Wilson (1985) state, “it is especially difficult to pursue both program improvement and resource reallocation at the same time, and an institution's interests are served best if reviews focused on program improvement are conducted separately from those concerned with reallocating resources" (p. 2). Program review can be a tool for advocating for resources and may be more meaningful if focused primarily on program improvements. Some participants described program review as a mechanism through which programs are assessed for impact and feasibility. Participants acknowledged the connections between assessing financial sustainability and program quality in the reviews, and questioned whether an accountability focus and a reflective focus were compatible. Some participants questioned whether the parties centrally impacted by or interested in accountability within review were students, faculty, administrators, or policy makers. The power dynamics at play within program improvement and accountability of program impact and feasibility were recognized and acknowledged by participants. Reviewing the literature, we see a growing trend towards quality assurance practices focused on feasibility in the United States and Ontario (Creamer & Janosik, 1999; McGowan, 2019), which may also be the case in BC. While this accountability is often to funders, government ministries, or provincial or state boards, and tied to performance-based funding (Barak, 2007; Lawrence & Rezai-Rashdi, 2022), as Wagenaar (2015) states, review processes “should start with student needs; faculty members should see program review as part of their academic responsibility to their students” (p. 13). Davison and colleagues (2009) sum this conflict up neatly: “Neither faculty nor staff is best motivated by statutory regulations and threats of external accountability, but rather by the desire to see students succeed” (p. 9). Program review can be meaningful, but it may not be when perceived as an accountability measure, particularly if faculty members do not see students as beneficiaries of the accountability.
Some participants found or created meaning through the opportunity to include conversations around equity impact as well as Indigenization and decolonization in program review, and some commented on the absence of the same. Participants described the opportunity to reflect on equity and access, an opportunity that they created themselves in some cases. The importance of and lack of availability of disaggregated data was raised, as was a notable absence of questions around Indigenization or decolonization in the program review templates and resources. Participants identified elements of the program review that were upholding colonial and oppressive structures and exercised their agency to challenge these. Reviewing the literature around equity, Indigenization, and decolonization, there is widespread agreement that without explicit examination, program review and other quality assurance practices may remain entrenched within structures that are reflections of colonial and neoliberal systems (Anderson & Smylie, 2009; Hoare et al., 2022; LaFrance & Nichols, 2008; Senter et al., 2020; Vettori, 2018). As Rockey and colleagues (2021) describe in the context of anti-racist practices, “adopting an equity-conscious lens... can move institutions toward the implementation of anti-racist change, with the goal of examining current systems to understand how unequal power structures affect racially minoritized people” (p. 2). Program review can be meaningful when considerations around equity, Indigenization, and decolonization are included in the process.

5.2.1.3 Can Program Review be Both Meaningful and Manageable?

Considering the purpose and impact of program review, it became clear in some instances that the process can be both meaningful and manageable, but only when departments have sufficient time, resources, and autonomy to balance both the reflective and the forward-looking aspects of review. This dynamic was particularly evident when considering meaningful engagement and implementing program improvements. Engaging meaningfully with impacted parties required more time and resources than were available in some cases, and meaning was sacrificed for manageability. Similarly, in some cases, logistics around creation of action plans hampered the potential of manageable and implementable plans that would result in meaningful program improvements. Engagement with faculty, students, graduates, and external and internal community members was highlighted as centrally important to the participants’ experiences of program review. Participants emphasized a commitment to engaging meaningfully with students and graduates, their motivations for doing so, and the logistical challenges that they faced. Overall, participants described being able to exercise agency to practice meaningful engagement. Considering the literature, we find, in particular, agreement that program review should prioritize engagement with faculty members (Davison et al., 2009; Hoare et al., 2022; Mussawy & Rossman, 2018). As Mussawy and Rossman (2018) state, the success of such processes is contingent upon “engagement of key stakeholders including faculty, staff, and administrators” (p. 9).
Groen (2017) defends a participatory approach to quality assurance and highlights the importance of engaging widely, describing that “a concerted effort to both situate quality assurance processes within the context of academic programs and enable a supported participatory approach will greatly contribute to more relevant assurance processes, and by consequence, quality higher education” (p. 96). Program review can be most meaningful when time and resources support wide and fulsome engagement. All of the participants described solving problems and proposing and implementing program changes and improvements as one of the most meaningful parts of program review, and in some cases, additionally as one of the most fulfilling parts of educational leadership. Each participant also described challenges in creating recommendations and action plans that could result in meaningful program improvements, from a rushed action planning phase, to balancing specificity of recommendations with faculty consensus, to working with internal departments and external panelists who did not understand and therefore did not reflect the nuances of the programs within their recommendations. Scholars have focused on the importance of connecting program review with program improvements (Creamer & Janosik, 1999; Davis et al., 2020; Groen, 2017; Senter et al., 2020). As Davis et al. (2020) explain, the strongest program reviews emphasize “reflection, conversation, and feedback in order to facilitate a strong vision for the future through an honest assessment of program strengths, weaknesses, and opportunities for improvement” (p. 4). Similarly, Davison et al. (2009) found “an effective process is likely to make its practitioners proud and humble in turn, as they discover the things they do well and the areas that can be improved” (p. 21). Program review is a meaningful process but can result in action plans that are neither meaningful nor manageable. Findings included substantial discussion regarding the purpose of program review, and its potential, perceived, and real impact. Considering purpose and impact, factors that affect the meaningfulness of the process include reflexivity, connections between program review action plans and institutional strategic plans, allocation of time and resources, guidance and facilitation, student focus, inclusion of equity and access factors, and discussion and reflection about Indigenization and decolonization. The next section focuses on the structure and process of the review project, examining factors impacting both meaningfulness and manageability.

5.2.2 Project Structure and Process

The structure and process of program review as a project was a central theme in many of the participants’ stories. Program review is a large project, in some cases the biggest project that participants had led as educational leaders. Training, support, and coordination were identified as necessary to complete the project, as were resources and supports such as facilitators, templates, and manuals. The availability, collection, and analysis of data that accurately reflected program students, outcomes, and needs was emphasized, including the lack of access to disaggregated data. How the project was managed and overseen was highlighted as impactful to faculty experience, leading to a discussion of the reputation of program review, which, in many cases, was quite different from faculty members’ actual experiences of the review.
5.2.2.1 Can Program Review be Both Meaningful and Manageable?

Program review is a large and complex project, and participants described training, support, and project coordination as impactful aspects of the completion and success of the review. By and large, participants found that the structures of training, support, and coordination supported their review. In the literature reviewed, there were ubiquitous recommendations for personnel leading the process to be sufficiently resourced (Davis et al., 2020; Davison et al., 2009; Germaine & Spencer, 2016; Hoare et al., 2022; McGowan, 2019; Senter et al., 2020). Hoare and colleagues (2022) pointed out that in most cases, institutions have resources available to support departments in program review, although there may be difficulty accessing them, and proposed a cohort model led by both learning experts and quality assurance practitioners that would “incorporate professional development opportunities for academics” (p. 6) to increase “understanding and appreciation for continuous quality improvement for educational programming" (p. 6). Program review is both more meaningful and more manageable when there is adequate support, coordination, and training incorporated. At City College, there is no centralized quality assurance office, and the Teaching and Learning Centre provides coordination and facilitation for departments going through review. This work is undertaken by instructional associates. City College also has a standing educational quality committee that is responsible for oversight of program quality assurance. Leadership and oversight impacted many of the participants’ experiences of the process. The power dynamics involved with leadership and oversight were complex for participants. Some experienced the facilitation and guidance as essential and unproblematic, while others experienced it as rigid and troublesome. In the literature, some scholars explored leadership of quality assurance processes and emphasized the importance of departments and faculties taking ownership over accountability processes and outcomes (Mussawy & Rossman, 2018; Senter et al., 2020). Some discussed the implications of jurisdiction-wide versus institutional control over review processes. Barak (2007) suggested “faculty ownership in the reviews would be enhanced if the locus of the reviews was at the institutional level” (p. 15), whereas Skolnik (1989) called into question “the appropriateness of a total system-wide application of the connoisseurship model; that is, having a single group of connoisseurs make quality judgments for all programs” (p. 639). The leadership and oversight structure of program review processes impacts both manageability and meaningfulness.

5.2.2.2 Are the Manageability and Meaningfulness of Program Review in Conflict with One Another?

There were some factors explored in which the manageability of program review and its meaningfulness were in direct conflict with each other. The resources available tended to be experienced as sparse at certain points in the review cycle, and for the review to remain manageable, decisions were made that impacted the potential for meaningful and reflective reviews. In particular, the data available to departments, and what they collected during the review, were shaped by the time and human resources available, and participants faced challenges in collecting sufficient and meaningful data and information to draw meaningful insights from the process.
The reputation of program review that preceded participants’ experiences of the process highlighted these tensions, and participants described finding the review both more manageable and more meaningful than expected, if the focus on program improvements was maintained. To be successful, program review requires sufficient resources and supports such as templates, access to data and information, budget, and time. In the study, every participant commented on the resources available, and most found them both helpful and necessary, and found that the structures in place supported the program review, at least to a certain extent. In the research reviewed, there was agreement that quality assurance processes need to be well-resourced in order to be impactful and effective (Creamer & Janosik, 1999; Davison et al., 2009; McGowan, 2019), but little mention of what kinds of resources are required. Davis et al. (2020) explicitly acknowledged that the “training, support, and resources [are] identified as important aspects underpinning the [program review] process” (p. 8), and Senter et al. (2020) recommended that departments reduce the time faculty devote to the review by “borrowing resources and expertise from others” (p. 13), including free sector-wide resources. Program review can only be meaningful if it is well-resourced to the point of being manageable. Data were an important point of discussion for the participants, who spoke about the importance of meaningful and descriptive sources, collection methods, and analysis of data and information. Participants acknowledged that given the timeframe and the existing expertise in the institution, the data available did not tell the full stories of their programs. Additionally, participants identified that data, and how they were analyzed and presented, could reinforce and validate assumptions to the benefit of programs and could also support the uncovering of hidden assumptions that reinforce hegemony. The literature reviewed supported the broad importance and influence of data in program review. As McGowan (2019) explained, program review’s focus on outcome assessment, and the associated “emphasis on data collection for decision-making purposes, has turned the tide from a best practice to an expected practice" (p. 61). Davison et al. (2009) recommended that program review entail a “candid self-evaluation supported by evidence, including both qualitative and quantitative data” (p. 8). Considering a decolonial, anti-racist, and equity-focused lens, LaFrance and Nichols (2008) emphasized the responsibility of valuing “subjective experience as well as objective data” (p. 27). Rockey et al. (2021) pointed out that program review can drive change, and that the templates and structure of reviews can “facilitate the examination of disaggregated data, identification of racial equity gaps, and commitment to systemic change across programs, academic disciplines, and departments within a single institutional context" (p. 1). In the context local to City College, British Columbia’s Office of the Human Rights Commissioner (2020) released a report entitled Disaggregated Demographic Data Collection in British Columbia: The Grandmother Perspective, which suggested that the collection of race-based, Indigenous, and other disaggregated data can further social equality, and that care must be taken to avoid the reinforcement of discrimination and existing biases.
Only with sufficient resources and expertise for robust data collection, analysis, and interpretation can program review be both meaningful and manageable.

5.2.2.3 Is Program Review Manageable?

The reputation of the program review process, as compared to participants’ actual experiences, highlighted a central tension. Some described that they found the process more manageable than expected when they were able to take ownership of the process and exercise some flexibility around various components. The research related to faculty perception of program review suggested that the process was met with reactions ranging from skepticism and cynicism to anxiety and active withdrawal (Cardoso et al., 2018; Conrad & Wilson, 1985; De Valenzuela et al., 2005; Kleniewski, 2003; Senter et al., 2020). McGowan (2019) noted that review processes may not meet institutional needs, based partly on “perceptions of faculty participants of authoritarian and non-collegial processes” (p. 55). Germaine and Spencer (2016) stated that “through thoughtful consideration of the perspectives of faculty and administrators who are embarking on accreditation, the process has the potential to be a series of inspirational faculty development experiences rather than a begrudged necessity" (p. 92). Although they wrote about accreditation rather than internal review processes, these lessons may be applied more broadly to quality assurance practices. Program review can be manageable, and if consideration is given to its reputation and faculty perception, it can also be meaningful. There was significant focus in both the participants’ stories and the literature on the structure and process of program review, influencing both the manageability and meaningfulness of the process. When well-resourced with time, budget, templates, and guidelines, and when training, support, and coordination align with leadership and oversight, program review can be both manageable and meaningful. A focus on equitable and inclusive data collection methods and analysis, while adding to the time and effort of a review, can positively impact meaningfulness. The process can be manageable and, as a result, meaningful, if explicit and thoughtful consideration is given to how the process is received by faculty members. Among the most impactful aspects of program review are time and workload, as illustrated in the following section.

5.2.3 Time and Workload

Program review is a time-limited and resource-intensive process, and findings indicated that the time required of faculty members, as well as the workload and how it was distributed, impacted faculty members’ experiences. Participants discussed the timeframe of the project and whether they found it rushed or prolonged. Faculty release time was raised as a separate but interconnected issue, and participants discussed both the benefits and complications of secondment, and the challenges of accessing release time.

5.2.3.1 Are the Meaningfulness and Manageability of Program Review in Conflict with One Another?

The timeframe in which the program review takes place was a common thread amongst many of the participants’ stories and one which highlighted differing opinions. Participants reflected on the time allotted to the process and questioned whether a uniform timeframe was suitable for all programs. It could be the case that the timeframe should be responsive to the particular needs of programs and their personnel.
There is substantial discussion around time in the literature reviewed, and widespread agreement that quality assurance processes can be particularly time-consuming relative to the benefit they produce (Creamer & Janosik, 1999; Davison et al., 2009; Senter et al., 2020). Hoare et al. (2022) pointed out that Western and colonial paradigms of evaluation emphasize accountability over interdependence and cooperation. They referred to Wehipeihana (2019), who suggested a paradigm shift to “evaluation as inherently relational” (p. 378). Additionally, as Coombs (2022) highlighted, program reviews are “conducted on a five- to seven-year cycle, whereas strategic planning efforts typically encompass a longer time frame and address broader goals beyond academic programs” (para. 8). As Vettori (2023) articulated, “internal and external quality assurance mechanisms are not only binding time but regulating and governing it, imposing temporal norms regarding tempo, rhythm, time-spans, time-scales and time ownership on higher education institutions and the people working and learning there” (p. 10). For program review to be meaningful, the timeframes must be manageable and aligned with institutional priorities.

5.2.3.2 Can Program Review be Both Manageable and Meaningful?

Participants discussed faculty release time for program review work, which was not universally available at City College. For those departments that had release time for the project, it was described as unequivocally beneficial, even when challenging for those individuals. Overall, participants’ experiences suggested a paradigm that emphasized efficiency and productivity despite limited resources. It is supported in the literature that time needs to be provided for faculty members to engage in program review (Davis et al., 2020; Germaine & Spencer, 2016; Senter et al., 2020). Senter et al. (2020), in their national survey-based study of sociology department chairs, found that “slightly more than 10 percent indicate that program review lead(s) received some release time” (p. 7) and recommended that faculty members take steps to reduce the time demands, such as relying on research assistants, and sharing expertise and resources. Germaine and Spencer (2016), studying faculty perceptions of accreditation at a large research university, found that 66% of their participants indicated that time and workload pressures were the main barriers to success, and recommended that leadership “makes accommodation for the extra workload undertaken by faculty" (p. 91). Program review can be meaningful when faculty time allocated to the process is carefully considered and can be manageable if the time and workload required are accommodated. The time and workload involved in program review are significant and impact faculty members’ experiences considerably. Program review can be both meaningful and manageable when the timeframes are realistic and aligned well with the academic year as well as planning and fiscal timeframes, when there are accommodations made for the time required, and when staff and other resources are utilized such that faculty member expertise and strengths are considered in their allocation. Time and workload are impacted by the power dynamics and the relationships involved in program review, as considered in the following section.

5.2.4 Power and Relational Dynamics

An overarching theme woven into many of the participant discussions was the power and relational dynamics involved in program review.
An exploration of participants’ experience of agency throughout the process included flexibility, adaptability, and responsiveness in the timelines, templates, methods, and criteria, and a common experience of recommendations for needed changes and improvements falling outside the locus of control of individual departments, paired with a lack of response from the institution. Consensus and commitment of faculty members were also discussed, illustrated through stories of how participants balanced commitment and consensus with creating viable, actionable plans.

5.2.4.1 Is Program Review Meaningful?

Engagement with faculty was an important theme for participants and was described as a challenge in most cases. Some reflected on faculty members’ commitment to the process, and others described the efforts they took to achieve consensus. Participants sought to balance their own agency in the process with faculty members’ commitment, and this was complex insofar as faculty members themselves may have experienced varying degrees of agency. Some of the literature reviewed explored faculty commitment to quality assurance processes. Senter and colleagues (2020) pointed out that program review is “most successful when faculty are committed to the process” (p. 5) and recommended that department leaders control review processes to ensure that they can be shaped to the specific needs of the departments. Kleniewski (2003) suggested that difficult decisions made through the course of program review could be made “in a less politicized and contentious way when they are the result of a consensus forged with departmental faculty through the program review process” (para. 28). Germaine and Spencer (2016), examining faculty commitments to accreditation, explored faculty resistance to change, and suggested that faculty may resist recommendations with a high degree of specificity, as “faculty may fear the long-term sustainability of specific changes, viewing them as a ‘flavor of the month’” (Lueddeke, 1999, cited in Germaine & Spencer, 2016, p. 72). Program review can be meaningful when considered a collaborative exercise that takes specific departmental needs into account.

5.2.4.2 Can Program Review be both Meaningful and Manageable?

Agency was a recurring theme in the participants’ stories. Most of the themes that emerged may well have also been explored as they related to the agency of the faculty involved. Each of the participants described some degree of desire to exercise control in the process, making it more manageable, more meaningful, or both. The power dynamics at play, given the structures that frame program review, were invariably complex and related to both agency and the underlying assumptions about quality, accountability, and trust, whether explicit or unspoken. Scholars highlighted a lack of agency for faculty in determining assessment methods and criteria, and frustration in perceiving quality assurance processes as an external imposition (Cardoso et al., 2018; McGowan, 2019; Skolnik, 1989). Writing about accreditation processes, Cardoso and colleagues (2018) pointed out that faculty members can “show an uncritical position towards their participation... accepting the roles given to them” (p. 78), which may reflect a deficit in ownership over the process. Wagenaar (2015) suggested that decisions made in program review should “evolve out of careful deliberation reflective of everyone’s input, which are centered on solid program goals” (p. 13).
As Senter et al. (2020) summarized, “faculty are most engaged when they trust that institutions are committed to a meaningful process, the criteria for evaluating program quality are broad and outcomes-based, and they feel ownership over the process and perceive that their efforts are worthwhile” (p. 5). Program review can be most meaningful when departments have agency in determining criteria, methods, and timelines that are manageable. Findings indicated that faculty members’ experiences are impacted by the power dynamics and the relational dynamics surrounding their work, as well as the commitment required of department leaders and program coordinators to engage meaningfully. These themes were interwoven throughout the participants’ stories. Similarly, the efforts to encourage faculty members to commit to the process as active participants, and to bring forward recommendations that represented consensus among faculty members, were highlighted. Program review can be most meaningful for faculty members when considered a collaborative effort, and when departments can assert agency in determining criteria, methods, and timelines that are manageable and that take their departmental and program needs into account. The cross-case interpretations presented above drew together and explored the participants’ experiences with program review, centred around the four issues presented at the beginning. In the following section, several recommendations that arose from responses to these issues, contextualized through the participants’ stories, are presented, along with suggestions for implementation that draw on analysis of both the participants’ stories and the literature reviewed.

5.3 Recommendations

The previous sections have explored faculty members’ experiences with program review through the stories of the five participants, considering the four issue questions: Is program review meaningful? Is program review manageable? Can a program review process be both meaningful and manageable? Are the meaningfulness and manageability of program review in conflict with one another? In responding to these questions within the themes and subthemes, response statements were presented, and from those, 21 specific recommendations were formed. The recommendations are presented below and associated with the response statements in Table 5, followed by four suggestions for implementation.

1. Ensure that program review is framed as a reflective activity.
2. Create explicit connections between program review and institutional planning.
3. Incorporate mid-cycle institutional reviews of action plans.
4. Consider both internal and external parties for consultations and review panels.
5. Plan for various forms of engagement, suitable to the impacted groups.
6. Distinguish between aspirational recommendations and achievable action plans.
7. Cultivate an institutional culture of continual improvement.
8. Hold students as centrally impacted parties when addressing accountability.
9. Explore inclusion of disaggregated data to assess barriers to access and success.
10. Include reflection regarding Indigenization and decolonization into resources.
11. Incorporate equity impact analysis into program reviews.
12. Conceptualize program review as an opportunity for professional development.
13. Include both quality assurance practitioners and teaching and learning practitioners in facilitating program review.
14. At the institutional level, adequately resource program review.
15. Seek opportunities to collaborate intra- and inter-institutionally.
16. Include qualitative and quantitative sources, collection methods, and analysis of data.
17. Train personnel in equitable and inclusive evaluation practices and data collection methods.
18. Encourage departmental ownership over the aspects of review processes and outcomes for which they can exert influence.
19. Consider program and departmental factors when setting timeframes and timelines for program review.
20. Make accommodations for the time and workload that program review requires from department / review leaders.
21. Utilize staff resources such as research assistants to structure program review such that faculty expertise and strengths are considered in their involvement.

Table 5
Recommendations by Issue and Response Statement

Is program review meaningful?
• Through critical and analytic reflection, program review can be a meaningful exercise. (Recommendation 1)
• Program review can be meaningful, but disconnection between this process and institutional planning and initiatives undermines the meaning for faculty members. (Recommendations 2, 3)
• Program review can be a tool for advocating for resources and may be more meaningful if focused primarily on program improvements. (Recommendations 1, 2)
• Program review can be meaningful, but it may not be when perceived as an accountability measure, particularly if faculty members do not see students as beneficiaries of the accountability. (Recommendation 8)
• Program review can be meaningful when considerations around equity, Indigenization, and decolonization are included in the process. (Recommendations 9, 10, 11)
• Program review can be meaningful when considered a collaborative exercise that takes specific departmental needs into account. (Recommendations 5, 18, 19)

Is program review manageable?
• Program review can be manageable, and if consideration is given to the reputation and faculty perception, can also be meaningful. (Recommendation 15)

Can program review be both meaningful and manageable?
• Program review can be most meaningful when time and resources support wide and fulsome engagement. (Recommendations 4, 5)
• Program review is a meaningful process but can result in action plans that are neither meaningful nor manageable. (Recommendations 6, 7)
• Program review is both more meaningful and manageable when there is adequate support, coordination, and training incorporated. (Recommendations 12, 13)
• The leadership and oversight of program review processes impacts, and should consider, both the manageability and meaningfulness. (Recommendations 13, 18)
• Program review can be most meaningful when faculty time allocated to the process is carefully considered and can be manageable if the time and workload required are accommodated. (Recommendations 20, 21)
• Program review can be most meaningful when departments have agency in determining criteria, methods, and timelines. (Recommendation 18)

Are the manageability and meaningfulness of program review in conflict with one another?
• Program review can only be meaningful if it is well-resourced to the point of being manageable. (Recommendations 14, 15)
• With sufficient resources and expertise for robust data collection, analysis, and interpretation, program review can be both meaningful and manageable. (Recommendations 9, 14, 16, 17)
• For program review to be meaningful, the timeframes must be manageable. (Recommendation 19)

5.3.1 Suggestions for Implementation

To begin to address the recommendations described above, four suggestions for implementation are included here. First, quality assurance processes should be centrally coordinated by an educational quality assurance or similar office that works collaboratively with a teaching and learning centre. A process guided by both quality assurance and instructional development practitioners could ensure that the process remains reflective, has the potential to examine issues of equity and access, and is meaningfully connected to institutional planning and resource allocation that is focused on program improvements and centred on student needs. Second, quality assurance activities need to be planned with departmental factors such as size and structure taken into account, and should involve some wrap-around supports. There may be no one-size-fits-all budget or timeframe for all programs and departments within an institution, and adaptability can enable faculty members to engage in a more fulsome manner and to take ownership over the methods and metrics, to whatever degree is possible.
Third, quality assurance practices should include considerations around equity and access, and should challenge colonial structures, mechanisms, and definitions of quality. This might involve training institutional research personnel in equity-focused and inclusive data collection, analysis, and reporting methods, such as disaggregated data and qualitative data collection. Fourth, quality assurance practices such as program review should be considered collective activities and should incorporate collaboration within and among institutions. This could be achieved through internal peer reviewers (Davis et al., 2020), program review learning communities (Hoare et al., 2022) and working groups, and shared resources amongst institutions through provincial or state-wide educational quality or teaching and learning organizations.

5.4 Limitations and Future Studies

There are some clearly identified limitations to the study, and each suggests further research. First, the scope was narrow, taking place at only one institution within one jurisdiction. There were no more volunteers than participants, so although maximal variation sampling was the aim, convenience sampling was the outcome. While the participants came from a variety of schools, departments, and programs within City College, the sample was not representative of the myriad departments and programs, and thus the meanings and implications that could be drawn are somewhat restricted. A mixed-method study reaching across jurisdictions could reach a vastly larger cross-institutional and cross-jurisdictional participant pool and could explore not only department leaders’ experiences, but also those of other parties involved in program review, including quality assurance practitioners, faculty members, and administrators. Second, the participants’ stories and narratives contain discussion of equity and barriers to access, and the literature reviewed contains discussion about the socio-political contexts in which quality assurance exists. However, there is no discussion of class in this study. Class analysis would add depth and meaning and could create potential to draw comparisons between academic and applied post-secondary institutions, or rural and urban institutions. Third, a critical policy analysis that includes document review or analysis could explore the policy and decision-making aspects of program review. A document analysis could include templates and institutional policies, as well as provincial policies related to program review. Since much of the existing research is policy research, a study that explores both participants’ experiences and the social actors that influence policy creation and implementation could draw out rich and nuanced meaning. As Skolnik (2016) suggests, “collecting data from reviewers could be a valuable complement to document analysis, because reviewers may be able to exert such a great influence on the outcomes of quality assurance processes" (p. 367). There is much potential for further research. A longitudinal study could explore how faculty members’ experiences change over time, while a study that focuses on program review within trades and vocational training could explore how these processes impact programs that have specialized facilities needs. As Senter et al. (2020) state, “bringing more research to bear on the issue of whether accountability processes actually work to improve quality might, over time, lead to systematic change" (p. 14).
Chapter 6 Conclusion

This study sought to explore the experiences of faculty members leading programs through quality assurance activities, particularly internal, institution-led program review. Four central themes were explored through participant voices: purpose and impact of program review, structure and processes of the review project, time and workload, and power and relational dynamics. Through analysis of the participant narratives with these thematic lenses, interrelated and contextual issues were raised around the manageability and meaningfulness of program review, and the interconnectedness and tensions between the two. In responding to these issues, specific recommendations were presented and distilled into four suggestions for implementation. First, that quality assurance processes be centrally coordinated and co-led by quality assurance practitioners and instructional development practitioners. Second, that quality assurance planning include adaptability to departmental factors. Third, that quality assurance practices explicitly consider issues around equity and access and include training and resources to do so. Fourth, that collaborations within and among institutions are included within quality assurance practices. Limitations of the study were discussed, giving rise to suggestions for future studies. Program review has the potential to be critically reflective and transformative. When well-resourced and with adequate time available, departments can use these activities to improve student learning, remove barriers to access, and increase equity. Further studies are suggested, with the hope that the research conducted will both enhance understanding of quality assurance in higher education and challenge existing colonial structures and mechanisms which are entrenched within these processes, to the benefit of practitioners, faculty members, and ultimately, students.

References

Aczel, A. D. (2011). The mystery of the aleph: Mathematics, the kabbalah, and the search for infinity. Washington Square Press.

Anderson, M. J., & Smylie, J. K. (2009). Health systems performance measurement systems in Canada: How well do they perform in First Nations, Inuit, and Métis contexts? Pimatisiwin, 7(1), 99–115.

Asher-Schapiro, A. (2020, November 13). Skin in the game: Wall Street’s answer to the student-debt crisis. Harper’s Magazine, December 2020. https://harpers.org/archive/2020/12/skin-in-the-game-wall-street-student-debt-crisis/

Baker, D. N., & Miosi, T. (2010). The quality assurance of degree education in Canada. Research in Comparative and International Education, 5(1), 32–57. https://doi.org/10.2304/rcie.2010.5.1.32

Barak, R. J. (2007). Thirty years of academic review and approval by state postsecondary coordinating and governing boards. State Higher Education Executive Officers. https://eric.ed.gov/?id=ED502182

Barak, R. J., & Breier, B. E. (1990). Successful program review: A practical guide to evaluating programs in academic settings (1st ed.). Jossey-Bass.

British Columbia’s Office of the Human Rights Commissioner. (2020). Disaggregated demographic data collection in British Columbia: The Grandmother Perspective. https://bchumanrights.ca/wp-content/uploads/BCOHRC_Sept2020_DisaggregatedData-Report_FINAL.pdf

Brookfield, S. (1998). Critically reflective practice. Journal of Continuing Education in the Health Professions, 18(4), 197–205.
https://doi.org/10.1002/chp.1340180402

Cardoso, S., Rosa, M. J., & Vidiera, P. (2018). Academics’ participation in quality assurance: Does it reflect ownership? Quality in Higher Education, 24(1), 66–81.

Cheng, Y. C., & Tam, W. M. (1997). Multi-models of quality in education. Quality Assurance in Education, 5(1), 22–31. https://doi.org/10.1108/09684889710156558

College and Institute Act. (1996). https://www.bclaws.gov.bc.ca/civix/document/id/complete/statreg/96052_01

Collier, L. (2019). College costs. CQ Researcher, 29(38), 1–29.

Conrad, C. F., & Wilson, R. F. (1985). Academic program reviews: Institutional approaches, expectations, and controversies. ASHE-ERIC Higher Education Report, 2–3.

Coombs, V. (2022). Institutions should link program reviews to strategic plans. Inside Higher Ed. https://www.insidehighered.com/blogs/call-action-marketing-and-communications-higher-education/institutions-should-link-program

Creamer, D. G. (2001). Prioritizing academic programs and services: Reallocating resources to achieve strategic balance. The Journal of Higher Education, 72(5), 622–625. https://doi.org/10.2307/2672885

Creamer, D. G., & Janosik, S. M. (1999). Academic program approval and review practices. Education Policy Analysis Archives, 7(23). https://doi.org/10.14507/epaa.v7n23.1999

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Sage Publications.

Creswell, J. W. (2015). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (5th ed.). Pearson.

Creswell, J. W. (2016). 30 essential skills for the qualitative researcher. Sage.

Davis, H. P., Biddle, K. S., & Hall, M. R. (2020). Academic program review: Examining the experiences of faculty members serving as internal peer reviewers. Research & Practice in Assessment, 15(2). https://eric.ed.gov/?id=EJ1293307

Davison, D., Patton, J., Eng, M., Hanna, K., Grimes-Hillman, M., Watson, I., Jackson, J., & Vazquez, U. (2009). Program review: Setting a standard. The Academic Senate for California Community Colleges. https://files.eric.ed.gov/fulltext/ED510580.pdf

De Valenzuela, J. S., Copeland, S. R., & Blalock, G. A. (2005). Unfulfilled expectations: Faculty participation and voice in a university program evaluation. Teachers College Record, 107(10), 2227–2247. https://doi.org/10.1111/j.1467-9620.2005.00590.x

Dickeson, R. (2010). Measuring, analyzing, prioritizing. In Prioritizing academic programs and services (pp. 89–103). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118269541.ch6

Fischer, F., Torgerson, D., Durnová, A., & Orsini, M. (2015). Introduction to critical policy studies. In F. Fischer, D. Torgerson, A. Durnová, & M. Orsini (Eds.), Handbook of critical policy studies (pp. 1–24). Edward Elgar Publishing. https://doi.org/10.4337/9781783472352.00005

Germaine, R., & Spencer, L. R. (2016). Faculty perceptions of a seven-year accreditation process. Journal of Assessment and Institutional Effectiveness, 6(1), 67–98. https://doi.org/10.5325/jasseinsteffe.6.1.0067

Giroux, H. A., Welsch, J. R., Traverso, M., Wilson, M., Young, J., & Adams, J. Q. (2006). Culture, politics & pedagogy: A conversation with Henry Giroux. Media Education Foundation.

Gray, J. (2022). Plato’s ghost: The modernist transformation of mathematics. Princeton University Press.

Groen, J. F. (2017). Engaging in enhancement: Implications of participatory approaches in higher education quality assurance.
Collected Essays on Learning and Teaching, 10, 89–99.

Hail, C., Hurst, B., Chang, C.-W., & Cooper, W. (2019). Accreditation in education: One institution’s examination of faculty perceptions. Critical Questions in Education. https://bearworks.missouristate.edu/articles-coe/133

Harlan, B. (2012). Meta-review: Systematic assessment of program review. Online submission. https://eric.ed.gov/?id=ED536464

Harvey, L., & Green, D. (1993). Defining quality. Assessment & Evaluation in Higher Education, 18(1), 9–34.

Hoare, A., Dishke Hondzel, C., & Wagner, S. (2022). Forming an academic program review learning community: Description of a conceptual model. Quality Assurance in Education, 30(4), 401–415. https://doi.org/10.1108/QAE-01-2022-0023

Ikenberry, S. O. (2010). Foreword to the first edition. In R. C. Dickeson, Prioritizing academic programs and services: Reallocating resources to achieve strategic balance (2nd ed.). Jossey-Bass.

Jayachandran, J., Neufeldt, C., Smythe, E., & Franke, O. (2019). Practical measures for institutional program reviews: A case study of a small post-secondary institution. Canadian Journal of Higher Education, 49(2), Article 2. https://doi.org/10.47678/cjhe.v49i2.188229

Job, P., & Schneider, M. (2014). Empirical positivism, an epistemological obstacle in the learning of calculus. ZDM, 46(4), 635–646. https://doi.org/10.1007/s11858-014-0604-0

Jones, S. R., Torres, V., & Arminio, J. (2021). Negotiating the complexities of qualitative research in higher education: Essential elements and issues. Routledge.

Kleniewski, N. (2003). Program review as a win-win opportunity. AAHE Bulletin, 55(9). https://www.aahea.org/articles/win-win.htm

LaFrance, J., & Nichols, R. (2008). Reframing evaluation: Defining an Indigenous evaluation framework. Canadian Journal of Program Evaluation, 23, 13–31.

Lawrence, M., & Rezai-Rashdi, G. (2022). Pursuing neoliberal performativity? Performance-based funding and accountability in higher education in Ontario, Canada. In J. Zajda & W. J. Jacob (Eds.), Discourses of globalisation and higher education reforms: Emerging paradigms (Vol. 27, pp. 149–165). Springer International Publishing. https://doi.org/10.1007/978-3-030-83136-3_8

Lock, J., Hill, S. L., & Dyjur, P. (2018). Living the curriculum review: Perspectives from three leaders. Canadian Journal of Higher Education, 48(1), Article 1. https://doi.org/10.47678/cjhe.v48i1.187975

Lucander, H., & Christersson, C. (2020). Engagement for quality development in higher education: A process for quality assurance of assessment. Quality in Higher Education, 26(2), 135–155. https://doi.org/10.1080/13538322.2020.1761008

Lueddeke, G. R. (1999). Toward a constructivist framework for guiding change and innovation in higher education. Journal of Higher Education, 70(3), 235–260.

McGowan, V. (2019). Not too small to be strategic: The state of academic program review guidelines and instrumentation in public institutions. Administrative Issues Journal: Education, Practice, and Research, 9(1). https://doi.org/10.5929/9.1.1

Merriam, S. B. (1995). What can you tell from an N of 1?: Issues of validity and reliability in qualitative research. PAACE Journal of Lifelong Learning, 4, 51–60.

Mussawy, S. A. J., & Rossman, G. B. (2018). Quality assurance and accreditation in Afghanistan: Faculty members’ perceptions from selected universities. Higher Learning Research Communications, 8(2). https://eric.ed.gov/?id=EJ1201352

Pino Gavidia, L. A., & Adu, J. (2022).
Promoting Excellence: Ontario Implements Performance Based Funding for Postsecondary Institutions. (2020, November 26). News.Ontario.Ca. https://news.ontario.ca/en/release/59368/promoting-excellence-ontario-implements-performance-based-funding-for-postsecondary-institutions
Quality Assurance Process Audit. (n.d.). Province of British Columbia. Retrieved October 7, 2023, from https://www2.gov.bc.ca/gov/content/education-training/post-secondary-education/institution-resources-administration/degree-authorization/degree-quality-assessment-board/quality-assurance-process-audit
Raffoul, J., Skene, A., Chittle, L., & Kartolo, A. (2023). ‘Accountable to whom, for what, and through what means’: Educational developers in the audit culture. International Journal for Academic Development, 28(3), 258–271. https://doi.org/10.1080/1360144X.2021.2015355
Rockey, M., Georges, C. T., & Bourne, J. (2021). Program review as an opportunity to drive anti-racist change (Pathways to Results implementation partnerships strategy brief). Office of Community College Research and Leadership. https://eric.ed.gov/?id=ED616285
S.2124 - 116th Congress (2019-2020): Skin in the Game Act. (2019, July 16). [Legislation]. https://www.congress.gov/bill/116th-congress/senate-bill/2124
Schindler, L., Puls-Elvidge, S., Welzant, H., & Crawford, L. (2015). Definitions of quality in higher education: A synthesis of the literature. Higher Learning Research Communications, 5(3), 3. https://doi.org/10.18870/hlrc.v5i3.244
Senter, M. S., Ciabattari, T., & Amaya, N. V. (2020). Sociology departments and program review: Chair perspectives on process and outcomes. Teaching Sociology, 49(1), 1–16. https://doi.org/10.1177/0092055X20970268
Shahjahan, R. A., Blanco Ramirez, G., & Andreotti, V. de O. (2017). Attempting to imagine the unimaginable: A decolonial reading of global university rankings. Comparative Education Review, 61(S1), S51–S73. https://doi.org/10.1086/690457
Siedlaczek, K. (2022). Quality assurance in British Columbia higher education: A policy analysis [Doctoral dissertation, University of British Columbia].
Skolnik, M. L. (1989). How academic program review can foster intellectual conformity and stifle diversity of thought and method. The Journal of Higher Education, 60(6), 619–643. https://doi.org/10.2307/1981945
Skolnik, M. L. (2016). How do quality assurance systems accommodate the differences between academic and applied higher education? Higher Education: The International Journal of Higher Education Research, 71(3), 361–378. https://doi.org/10.1007/s10734-015-9908-4
Sonday, A., Ramugondo, E., & Kathard, H. (2020). Case study and narrative inquiry as merged methodologies: A critical narrative perspective. International Journal of Qualitative Methods, 19. https://doi.org/10.1177/1609406920937880
Stake, R. E. (1978). The case study method in social inquiry. Educational Researcher, 7(2), 5–8.
Stake, R. E. (1995). The art of case study research. Sage Publications.
Sullivan, P. (2012). Qualitative data analysis using a dialogical approach. SAGE.
Vance, M. W. (1955). Evaluation of teacher education programs in the State of Oklahoma [Thesis, The University of Oklahoma]. https://shareok.org/handle/11244/137
Vettori, O. (2018). Shared misunderstandings? Competing and conflicting meaning structures in quality assurance. Quality in Higher Education, 24(2), 85–101. https://doi.org/10.1080/13538322.2018.1491786
Vettori, O. (2023). No time for improvement? The chronopolitics of quality assurance. Quality in Higher Education. Advance online publication. https://doi.org/10.1080/13538322.2023.2189454
Wagenaar, T. C. (2015). Effective program review: The lessons I have learned. Footnotes, 43(3). https://www.asanet.org/wp-content/uploads/savvy/footnotes/marchapril15/review_0315.html
Wehipeihana, N. (2019). Increasing cultural competence in support of Indigenous-led evaluation: A necessary step toward Indigenous-led evaluation. Canadian Journal of Program Evaluation, 34(2).
Whedbee, J. (2009). A narrative analysis using multiple case studies of nursing graduates who overcame academic adversity [Doctoral dissertation, Andrews University]. https://digitalcommons.andrews.edu/dissertations/1538
Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE.

Appendices

Appendix 1 Thompson Rivers University Ethics Certificate of Approval

Claire Sauve
From: do-not-reply-TRU@researchservicesoffice.com
Sent: April 19, 2023 12:35 PM
To: Sauve Claire (Primary Investigator)
Cc: Ramirez Gloria (Adjunct Faculty Member); Densky Karen (Adjunct Faculty Member); truromeo@tru.ca; do-not-reply-TRU@researchservicesoffice.com
Subject: REB Approval (COA)

April 19, 2023

Ms. Claire Sauve
Faculty of Education and Social Work\Education
Thompson Rivers University

File Number: 103378
Approval Date: April 17, 2023
Expiry Date: April 16, 2024

Dear Ms. Claire Sauve,

The Research Ethics Board has reviewed your application titled ‘Exploring Educational Leaders’ Experience of Program Renewal’. Your application has been approved. You may begin the proposed research. This REB approval, dated April 17, 2023, is valid for one year, until April 16, 2024.

Throughout the duration of this REB approval, all requests for modifications, renewals and serious adverse event reports are submitted via the Research Portal. To continue your proposed research beyond April 16, 2024, you must submit a Renewal Form before April 16, 2024. If your research ends before April 16, 2024, please submit a Final Report Form to close out REB approval monitoring efforts. If you have an award that is contingent on REB approval, then please present this approval to initiate release of funds.

If you have any questions about the REB review & approval process, please contact the Research Ethics Office via 250.852.7122. If you encounter any issues when working in the Research Portal, please contact the Research Office at 250.852.7122.

Sincerely,
Dr. Jennifer Shaw
Vice Chair, Research Ethics Board

Appendix 2 “City College” Research Ethics Certificate of Approval

Research Ethics Board (REB)
[City College logo]
[email address redacted]

Certificate of Approval

PRINCIPAL INVESTIGATOR: Claire Sauvé
DEPARTMENT: Continuing Studies
Ethics Protocol Number: 202302-24
Original Approval Date: April 5, 2023
Approved On: April 6, 2023
Approval Expiry Date: April 5, 2024
PROJECT TITLE: Exploring Educational Leaders’ Experience of Program Renewal
RESEARCH TEAM MEMBERS: Gloria Ramirez, Karen Densky (supervisors), TRU
DECLARED PROJECT FUNDING: None.

Conditions of Approval

This Certificate of Approval is valid for the above term provided there is no change in the protocol.
Amendments

To make changes to the approved research procedure in your study, please submit “Form 3 REB Amendment.” You must receive research ethics approval before proceeding with your amended protocol. Please allow 10 working days for approval.

Renewals

Your ethics approval must be current for the period during which you are recruiting participants or collecting data. To renew your protocol, please submit a “Form 3 REB Amendment” before the expiry date on your certificate. You will be sent an emailed reminder prompting you to renew your protocol about six weeks before your expiry date.

Project Closures

When you have completed all data collection activities and will have no further contact with participants, please notify the Research Ethics Board by submitting a “Notice of Project Completion” form.

Certification

This certifies that the [City College] Research Ethics Board has examined this research protocol and concluded that, in all respects, the proposed research meets the appropriate standards of ethics as outlined by [City College]’s policies for research involving human participants.

Sincerely,
[name of REB Chair and institution redacted]
Interim Chair, Research Ethics Board (REB)