
Assignment Task

Arbitration Case – The Ryerson Decision

Many professors dread anonymous Student Evaluations of Teaching (SETs) or Faculty Course Surveys (FCS), as less-than-stellar assessments could mean the end of their employment or a serious roadblock on their career path. Tenure and promotion applications are most often concerned with two main criteria: scholarship and teaching effectiveness. SETs or FCS are often considered as part of the assessment of an academic's teaching. This raises the question of how 'teaching effectiveness' can be accurately assessed, in particular by students' opinions.

In 2018, the use of SETs was challenged in an arbitration between Ryerson University and its Faculty Association, over which Arbitrator William Kaplan presided.

The Faculty Association and Ryerson administrators had been at odds in bargaining over the issue since 2003. Facing an impasse, the two sides agreed to the creation of a joint committee and an ongoing pilot project to address concerns about the surveys. Unable to resolve the matter, the Faculty Association filed a first grievance in 2009, then a second in 2015. The matter went to an unsuccessful mediation and proceeded to an arbitration hearing in Toronto in April 2018.

Ryerson Administrators’ Position

Ryerson argued that although student surveys were not solely determinative of the teaching effectiveness of a faculty member, the questionnaires did allow common issues and concerns to be identified alongside the other methods of evaluation. In addition, Ryerson felt that changes to evaluative tools should be gradual and left to the internal workings of the University to figure out.

Faculty Position

Faculty members had been expressing concerns about the use of student survey data for at least 15 years prior to Kaplan issuing his decision. The Faculty’s position was that the use of scoring averages was ineffective and inaccurate because student surveys failed to provide reliable data. They alleged a significant bias in many of the surveys and even possible violations of the Human Rights Code. Ultimately, they believed that student evaluations had no place in the evaluation of teaching effectiveness.

The Award

The award Kaplan handed down stripped Ryerson of the ability to use student evaluations as evidence of a professor's effectiveness (or lack thereof) in the classroom. Rendered in June 2018, the arbitration award generated much interest both in Canada and internationally.

Kaplan weighed the strengths and weaknesses of Ryerson's student survey system in arriving at his conclusion. In doing so, he relied heavily on the expert evidence of Professors Philip Stark and Richard Freishtat of UC Berkeley. Stark and Freishtat's evidence was that student surveys were biased based on an array of immutable personal characteristics including race, gender, accent, age and even a professor's attractiveness. This evidence led Kaplan to conclude that Ryerson's student surveys were "imperfect at best and downright biased and unreliable at worst." In his decision, Kaplan noted that student opinion surveys may still have value, since they are the main source of information from students about their educational experience. "SET results have a role to play in providing data about many things such as the instructor's ability to clearly communicate, missed classes made up, assignments promptly returned, the student's enjoyment and experience of the class, and its difficulty or ease, not to mention overall engagement." But he cautioned that the data should be "carefully contextualized" and that the "strengths and weaknesses of the SET need to be fully understood."

Impact

University administrations are starting to pay attention. In May, the University of Southern California announced it would stop using student evaluations of teaching in promotion decisions and use a peer-review model instead. The university will still use student evaluations to "provide feedback about students' learning experiences and to give context, but not as a primary measure of teaching effectiveness during faculty review processes," the vice-provost for academic and faculty affairs wrote in a memorandum to the academic senate and faculty council chairs.
