Prospective Teachers' Perceptions and Experiences with Grading Rubrics: A Survey Study

DOI: http://dx.doi.org/10.31703/gssr.2023(VIII-II).44 | Published: June 2023
Authored by: Tayyaba Tariq Mir, Amna Arif

Pages: 480-491

    Abstract

    Grading rubrics are valuable tools in education that clarify expectations, facilitate transparent grading, improve self-regulation, and increase learning outcomes. This paper explores the use and impact of grading rubrics from the perspective of prospective teachers, drawing on a review of the literature and prior research as well as a survey. The survey was conducted with 400 prospective teachers from four universities in Lahore, each of whom completed a questionnaire about their thoughts on and experiences with rubrics; its findings are reported in the paper. This work contributes to the existing body of research on the use of rubrics in education by giving a detailed and critical evaluation of the research that has already been conducted and by presenting fresh empirical data from the viewpoint of prospective teachers, with the aim of stimulating further research and practice on grading rubrics in the field of education.

    Key Words

    Grading Rubrics, Education, Prospective Teachers, Self-assessment, Peer-assessment

    Introduction

    In teaching and learning, feedback is vital because it reveals strengths and weaknesses and provides direction for progress. However, providing and receiving feedback in large classes or for complex tasks can be difficult. Furthermore, the impact of feedback on student motivation and learning outcomes depends on its quality, timeliness, and form of delivery (Earl & Timperley, 2008).

    Grading rubrics improve the efficacy of feedback by clarifying the objectives and criteria for assignments or examinations and by specifying accomplishment levels clearly (Muzammil & Romatoski, 2020). Grading rubrics enable instructors and students to convey task expectations, support open grading methods, and promote self-regulation and self-evaluation (Kiruthika, Adrian, & Adrian, 2020). In addition, they give feedback to students, thereby boosting overall efficiency and efficacy (Reddy & Heidi, 2010).

    This study examines the effect of grading rubrics on prospective teachers, concentrating on their views and experiences. These students were enrolled in a teacher education program and were preparing to become teachers. Exposure to rubrics in courses might affect students’ attitudes, actions, and intentions to use them in their own classrooms (Omer, Safiye, and Ozen, 2010). Examining how prospective teachers view and use rubrics for their own learning and how they intend to use them in their future teaching is essential.

    This paper begins with a literature review on the effectiveness of rubrics, prospective teachers' perceptions, and preparation for the use of rubrics, and research on the preparedness and usage of rubrics by prospective teachers in Pakistan. This article next discusses the outcomes of a survey conducted at four institutions in Lahore (Punjab), Pakistan, with four hundred prospective teachers who were given a questionnaire on their perceptions and experiences with rubrics. 

    Effectiveness of Rubrics

    For decades, rubrics have been widely used at the primary, secondary, post-secondary, and higher levels of education (Brookhart & Chen, 2015). Ragupathi and Lee (2020) emphasize that “students need to have information about the quality of their work while they work on their assessment tasks and need to comprehend what constitutes good performance […] a visual narrative that breaks down the assignment into parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery” (p. 73). In addition, Andrade (2005) asserts that “a rubric that is co-created with students; handed out; used to facilitate peer assessment, self-assessment, and teacher feedback; and only then used to assign grades is an instructional rubric. It is not just about evaluation anymore; it is about teaching. Teaching with rubrics is where it gets good.”

    Rubrics, on the one hand, support teachers in developing a measuring tool for the unbiased and unprejudiced measurement of students’ learning and skills, from the development phase through effective execution, and facilitate timely, individualized, and productive feedback (Andrade, 2005). The intention is to maintain the level of quality foreseen in the intended learning outcomes. On the other hand, rubrics also train, prepare, and enable learners to self-evaluate and self-assess the quality of the education they receive and carry out, and they help learners take accountability for their own learning and skill enhancement. Jeong (2015), quoting Wolf and Stevens (2007), notes that rubrics are effective, especially in language or literacy subjects, and contribute substantially to students’ overall learning by providing a transparent assessment process. Rini and Purnawarman (2019) are of the view that rubrics are beneficial not only because learners receive “fast and accurate feedback” but also because both learners and teachers acquire “effective reflection tools/ guidelines relevant to the learning process” (pp. 361-362).


    Prospective Teachers’ Perception and Readiness for the Use of Rubrics

    Studies around the globe have for decades stressed the readiness of prospective teachers, in terms of knowledge and skills, to decide on appropriate and effective assessment strategies and tools, such as rubrics, so that their future students’ in-class learning activities are aligned with the learning performances used to measure the level(s) of intended learning outcomes achieved. It has been observed that teacher educators and schools of education mostly train student-teachers to use the data derived from students’ learning performance in a course autonomously; instead, they should allow them to practice integrating such data effectively into ongoing instructional processes (Bocala & Boudett, 2015).

    When an effective and well-prepared teacher or assessor is aware of the effective and efficient use of rubrics before, during, and after the teaching process, their instructions not only help them focus on the delivered set of instructions but also relay teacher expectations to learners for specific learning performances (Andrade, 2005, p. 27). This becomes reciprocal in the sense that the rubric serves as a guideline for learners working individually, giving them a better understanding of the learning outcomes they are expected to attain through a given assignment or project, that is, their learning performance or task. In this regard, Reynolds-Keefer (2010) found that research on the practical use of rubrics made initial teachers much more aware, conscious, confident, and positive about using them for efficient time and resource management, but they fell short in understanding rubrics as a tool for reflection on learning performance. Their reluctance towards teacher-designed rubrics, and doubts about their effectiveness for younger learners and for certain subjects, only highlight the need to prepare them through quality time allocation; the ‘on-site’ execution of instructional activities and tasks (p. 6) is of much importance.

    Oo, Alonzo, and Davison (2021) noted that prospective teachers’ positive or negative beliefs and values play a pivotal role in raising or lowering their readiness to use, or neglect, appropriate assessment strategies in pursuit of the ultimate goal of ‘learning improvement.’ Their analysis drew attention to the crucial point that some prospective teachers may feel unmotivated to implement assessment strategies suitably because of the negative, rather “controlling effect” of their supervisor teachers, such as “critical comments” and/or a “bad relationship with them” (p. 7). Teng (2017, p. 129) likewise blamed the “unequal power relationships” between supervisor teachers and prospective teachers as the root cause of their helpless slide into demotivation and negativity. Such relationships also block the fresh and innovative ideas and concepts that novice teachers could share to bring positive change and improvement to conventional practices. In contrast, a good learning environment demonstrates positivity (Alvarez-Bell, Wirtz, & Bian, 2017, p. 8), encouragement, and a welcoming response to the innovative suggestions put forward by novice teachers.

    Hejji Alanazi (2019) claimed that prospective teachers have voiced a strong need to be prepared and well-versed in designing rubrics and communicating them for their planned learning activities in order to gauge students’ learning performances. However, when it comes to applying assessment methods and tools, such as designing and implementing rubrics, many prospective or pre-service teachers seem to lack this fundamental skill. Many of them are not even well aware of alternative assessment methods beyond traditional ones in the first place (Sari, 2021). This detaches evaluation and assessment from active learning activities and the intended learning outcomes.

    Oo et al. (2021) observed three major factors that influence prospective teachers’ decision-making for in-class assessments, categorized into three dimensions: the “iterational dimension,” prospective teachers’ knowledge about assessment; the “projective dimension,” their views and ethical intentions regarding the use of assessment; and the “practical evaluative dimension,” which includes sub-factors such as their supervisors, learners’ feedback in the form of timely and specific responses, and classroom demands and learning culture. Hence, they see possible progress in prospective teachers’ assessment decision-making through continuous, quality-based collaboration between teacher-training institutes and practicum schools.

    Different contexts require teachers to organize and administer different kinds of language assessment activities (Mede & Atay, 2017). Furthermore, teachers are required to deal with procedures and policies associated with local and external testing. Despite the importance accorded to it globally, language teachers’ ability to test and assess their students has been widely questioned; according to their research, many language tests prepared by teachers are of poor quality.

    Oftentimes, rubrics are interpreted by teachers as merely a compilation of interpretations and perhaps even as a series of questions. At its finest, a rubric reflects a teacher’s work and professional judgment. As such, the rubric defines learning objectives and performance standards. The criteria for student work are listed in a rubric, along with the level of quality for each criterion. Each part of the assignment is presented as a visual narrative with clear descriptions of the characteristics of that part’s work. Various levels of mastery are evident in these descriptions. Teacher advocates of rubrics report that they have a significant impact on fostering and enabling reflective practice, in addition to making assessments fair, transparent, and consistent. Those who advocate rubrics assert that they can provide insight into their teaching effectiveness. Therefore, Sanger and Nance (2020) suggested that there are numerous ways through which rubrics can enhance instructional quality and can be developed to be excellent “instructional illuminators.”

    Mede and Atay (2017) highlighted inconsistency between raters, or rater variability, which can take many forms. Different raters may hold different opinions about (a) the extent to which they adhere to the scoring rubric, (b) the interpretation of criteria used in operational scoring sessions, (c) how severe or lenient they have been when scoring examinee performances, (d) how well they understand and use rating scale categories, and (e) how consistent their ratings are across examinees, scoring criteria, and performance tasks. Their study examined how raters interpret scoring criteria when evaluating foreign language writing: a range of factors affected rater variability, including how trained, experienced raters interpreted the criteria used in essay scoring, and the importance raters accorded to the scoring criteria distinguished them into groups (classes, clusters).

    The emotional experiences prospective teachers have in field schools exert a considerable influence on their professional development and identity development. Particular attention has been paid to the diverse emotions prospective teachers experience during their teaching practicum and to the relationship between these emotions and their various identities (for example, noncooperative teachers, outsiders, supportive teachers, agentive teachers, and insecure teachers). Following the practicum, Weigle (1994) identified five major themes representing the prospective teachers’ emotional experiences and inner thoughts: (a) anxiety: dealing with unruly students makes it difficult to become a professional teacher; (b) determination: “I refuse an asymmetrical power relationship”; (c) uncertainty: “Do I qualify to teach?”; (d) disillusionment: “I don’t have a voice”; and (e) joy: “I appreciate what students are doing.”

    The use of rubrics by teachers and educators can be beneficial in many ways. Firstly, a rubric may be used in many different contexts, including virtual learning, essays, lab work, presentations, exhibits and performances, student portfolios, artwork, and even internships. Rubrics should not be treated merely as a scoring medium; they should also be viewed as primary instructional resources. Secondly, rubrics support more accurate, impartial, and consistent evaluations. Some teachers also enlist the help of their assistants for grading test papers, and in the absence of explicit performance criteria in rubrics, it becomes difficult to maintain the validity and reliability of the scoring process. Furthermore, rubrics serve a meaningful purpose for teachers in creating appropriate learning settings and evaluation strategies, and they assist educators in identifying inconsistencies and curricular gaps (Nkhoma, Nkhoma, Thomas, & Le, 2020).

    Teachers continually remind their students to improve their English, which is a second language for them; this is one of the crucial duties of English teachers (Brooks, 2013). It is done through feedback and corrections and is especially relevant in English writing classes. Providing feedback to writing students, both inside and outside the classroom, takes a significant amount of teachers’ time and effort. Teachers therefore often use tools like rubrics to streamline the feedback process. Rubrics have traditionally been regarded as tools that can “improve and enhance consistency of scoring, provide the possibility for facilitating valid judgment of complex skills, and promote learning.” However, current studies have raised some significant issues.

    In the language classroom, rubrics seem to be useful. The preparation of relevant tasks, the setting of a clear topic and prompts, the assistance of students in choosing rhetorical modes, and ample time to complete their writing task are significant elements of a successful English writing program, along with setting appropriate scoring criteria and ensuring that scores are valid and reliable. Teachers can accomplish this goal by using rubrics, which set clear criteria regarding written work that are relevant to both teachers and students.

    Prospective Teachers’ Readiness and Use of Rubrics in Pakistan

    Although Pakistan is a developing country, its education budget places an emphasis on educational standards and quality comparable to that of developed countries, especially in the private sector and, more recently, in the public sector as well. Despite remarkable and extensive work on pedagogy and instructional methods, assessment methods and techniques remain largely conventional, relying on question-and-answer sessions, objective-type questions, oral presentations, and students’ homework (Anwar, Muhammad, & Bokhari, 2022; Imran, Muhammad, & Waqar, 2022; Pervez, Muhammad, & Waqar, 2022). The next section provides some studies illuminating these claims.

    Saeed, Tahir, and Latif (2018) interviewed teachers in Lahore regarding their views on using classroom assessment techniques. Teachers used assessment techniques such as formative, summative, peer, and portfolio assessments, which help ensure that students in public and private institutions benefit from more effective and holistic development. However, the first two, formative and summative assessment, were used most frequently; teachers relied on summative assessment to promote students’ classroom learning, viewing it as playing the more pivotal role, whereas the other two were seen as alternative assessments (Saeed et al., 2018).

    Gul, Tharani, Lakhani, Rizvi, and Ali (2016) investigated teachers’ perceptions of assessment skills and practices, given the scarcity of studies on this topic. Their study also endorsed the diagnostic assessments teachers used with grade school students. The evaluators underlined the importance of assessment by explaining how it reinforces prospective teachers’ and students’ performance in the educational sector, enhancing their overall learning development through comprehensive information. Evaluation regimes, however, have a significant impact on teachers’ lives and learning. This is illustrated by the 2008 TALIS survey, which assessed how teacher appraisal and feedback affected teachers: on average, 41% of teachers’ evaluations emphasized getting students to perform better on their tests (Gul et al., 2016). In another study, Farooq, Ahmed, and Farooq (2020) asked students at Pakistani schools to write essays, which were then assessed by both teachers and students. Two rounds of essay writing were conducted: in the first, the researchers provided instruction; in the second, students wrote without instruction but after fifteen days of practice. The results of both rounds were then assessed with the help of a rubric.

    Ali, Shah, and Gujjar (2014) conducted an analysis of continuous classroom assessment at the primary level in Pakistan using rubrics. The results revealed that student achievement under individual class teachers was close to average in English, general education, Urdu, and mathematics, and rubric observations of continuous classroom assessments rated individual class teachers as fair in this respect. For the evaluation of class teachers’ in-class assessment, various areas were observed with the rubric, including instruction for students, class environment, tests conducted on the writing board, records of students kept during and after the assessment period, and instructional resources; in these areas, the rubrics showed that the teachers’ performance was very good. To teach students more effectively, the authors recommended that subject-specialist teachers be appointed at the primary level. Primary school teachers also need to become accustomed to updated syllabus terminology, for example, the Student Learning Outcomes (SLOs) used in the 2006 syllabus, and they require training to use assessment tools like rubrics in the classroom.

    Qasim and Qasim (2015) claimed that writing teachers frequently use rubrics to judge student writing. Rubrics are lists of standards and instructions that specify how a student will be graded; they are intended to reduce subjectivity in grading writing and to give students useful criticism for growth. The benefits of using rubrics for writing evaluation, however, are a matter of disagreement for some English teachers, who, for instance, dispute the importance of the rubrics’ criteria. Qasim and Qasim (2015) found that Pakistani university teachers who teach literature give substantial importance to elaboration and detail. Teachers believed that a rubric was a useful tool for evaluating both teachers’ and students’ progress. To give students room for reflection and independent learning, the researchers advise that rubrics for writing assessments incorporate authentic criteria, and that there should be open collaboration between language teachers, subject teachers, and literary instructors to extend this work to professionally important areas of teaching (Qasim & Qasim, 2015).

    Methods

    The purpose of this quantitative survey-based study is to investigate prospective teachers’ level of receptiveness to, perception of, and application of the feedback received through grading rubrics, and how this feedback shaped and directed their personal experiences during their study programs. The study also aimed to determine their readiness and expertise in practising this assessment tool in their current and/or future classrooms. It was also important to gauge the quality of the content and practice on grading rubrics as feedback available to them, along with the level of expertise and skill of their lead teachers and/or trainers in demonstrating this tool and using it to evaluate student teachers’ learning performance.

    The research questions investigated for this study were:

    1. How do university graduate students receive and apply the feedback from rubrics during their course of study?

    2. How do the prospective teachers perceive the use and effectiveness of rubrics?

    3. To what extent are the prospective teachers confident, during their study, in their supervisor teacher(s)’ expertise in teaching the design and application of rubrics?

    4. To what extent are the prospective teachers ready to create and implement teacher-made rubrics?

    5. How do the prospective teachers intend to practice the ‘adopted and adapted’ rubrics in their future classrooms?

    6. To what extent are the prospective teachers satisfied with the quality of the learning material on rubrics during their course of study?


    Research Design

    This research was conducted through a survey design for quick data collection, focused mainly on reaching a large number of participants and managing the time and quality of the research efficiently through Google Forms and face-to-face administration (Akram, Butt, & Muhammad, 2022; Creswell & Poth, 2018). A pilot study was conducted first on a sample of one hundred participants, recruited through the snowball sampling technique, who filled out the questionnaire mainly through Google Forms owing to the aftereffects of COVID-19.


    Participants, Settings, and Data Instrument

    For the quantitative data collection, participants were selected through convenience sampling: a sample of four hundred prospective teachers enrolled in the education departments of four universities in Lahore (Punjab). Data were collected through a survey questionnaire comprising twenty-one closed-ended questions with provided options to select, generally ranging from “receptive” to “very unreceptive” on a Likert scale (Batterton & Hale, 2017), along with other variations in the options depending on the nature of the question asked.

    Data Collection

    For the quantitative data collection, a survey was conducted with a questionnaire comprising twenty-one closed-ended questions with provided options to select, generally ranging from “receptive (1)” to “very unreceptive (4)” on the Likert scale (Likert, 2017). Other response options varied with the nature of the question, for example from “unsatisfactory (1)” to “excellent (4)”; for the question “The reason(s) you used/will use grading rubrics in your classroom/ future classroom was / will be: (select that apply),” the options were “because I was taught to use rubrics (1)”, “to benefit the students by giving rubrics as a guide (2)”, “the ease and efficiency of rubrics (3)”, “my school is data-driven, and I use rubrics to collect data (4)”, and “I do not use rubrics (5)”.

    A question was asked to investigate their level of receptiveness to feedback through grading rubrics during their study program: “What is the first thing you reviewed when receiving a graded rubric?” with the options “teacher’s comments (1)”, “grades (2)”, “my mastery level within the rubric (3)”, and “I don’t care about the rubric (4)”. For the question “How did/ do you intend to design your grading rubrics (i.e., online rubrics, rubrics included in a curriculum, etc.),” the options were “I create/ will create my own rubrics only (1)”, “I use/ will use premade rubrics only (2)”, and “I use/ will use both my own designed and premade rubrics (3)”. Other closed-ended questions offered mainly two options, “yes (1)” and “no (2)”.
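    To make the mean values reported later easier to interpret, the numeric coding implied by the questionnaire description above can be summarized as follows. This is an illustrative sketch reconstructed from the options listed in this section, not the authors' actual codebook:

```python
# Response coding reconstructed from the questionnaire description above (illustrative only).
RECEPTIVENESS_SCALE = {1: "receptive", 4: "very unreceptive"}  # 4-point Likert items

FIRST_THING_REVIEWED = {
    1: "teacher's comments",
    2: "grades",
    3: "my mastery level within the rubric",
    4: "I don't care about the rubric",
}

RUBRIC_DESIGN_INTENTION = {
    1: "I create/will create my own rubrics only",
    2: "I use/will use premade rubrics only",
    3: "I use/will use both my own designed and premade rubrics",
}

YES_NO = {1: "yes", 2: "no"}  # used for the remaining closed-ended questions
```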


    Data Analysis

    The quantitative data were analyzed descriptively through SPSS under the factors derived from the variables of the study. Means were calculated for the twenty-one closed-ended questions covering the prospective teachers’ receptiveness, perception, learning, and practice, and how they perceived the quality of the available learning material, the opportunities for practice, and the skill and expertise of their lead teachers/ trainers in relation to grading rubrics as feedback. A one-way ANOVA test was conducted to check the dependency of these variables on certain other variables, such as the degree program the prospective teachers were enrolled in, the semester/year of that degree program, and previous teaching experience(s).
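    The paper reports the SPSS workflow only in outline. As an illustration, an equivalent descriptive and one-way ANOVA analysis could be run in Python roughly as follows; the file and column names (e.g., rubric_survey.csv, receptive_no_grade, degree_program) are hypothetical placeholders for the actual SPSS variables, which are not published:

```python
import pandas as pd
from scipy import stats

# Load the coded survey responses (hypothetical file name).
df = pd.read_csv("rubric_survey.csv")

# Descriptive statistics (n, mean, SD) for one receptiveness item,
# mirroring the kind of values reported in Tables 4, 6, 7, and 9.
print(df["receptive_no_grade"].agg(["count", "mean", "std"]))

# One-way ANOVA: does receptiveness differ across degree programs?
groups = [g.dropna() for _, g in df.groupby("degree_program")["receptive_no_grade"]]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```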

    Results

    For the research at hand, four hundred student teachers or prospective teachers were approached to complete the survey. In the questionnaire, they were asked about their degree program, the semester or year of the degree program they were enrolled in, and any previous teaching experience they had had so far, coded as “home tuition” (1), “internship” (2), or “no teaching experience” at all (3). Those with no previous teaching experience were kept in the research to explore their level of acceptance of, and inclination towards, applying grading rubrics to give effective feedback to their future learners. In total, 387 respondents answered the question on gender identification; 12.5% of the sample were males and 84.3% were females.

     

    Table 1

    Internal Consistency of Questionnaire Using Cronbach's Alpha

    Cronbach's Alpha        No. of Items
    .788                    21

     

    Table 1 shows that the overall reliability of the instrument was .788 with 21 items, and it is considered an acceptable reliability level.
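    For readers who wish to reproduce this reliability figure outside SPSS, a minimal sketch is given below. It assumes the 21 coded item responses are loaded into a pandas DataFrame with one column per item; the file and column names are hypothetical:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = items.dropna()                          # listwise deletion of incomplete responses
    k = items.shape[1]                              # number of items (21 here)
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: columns q1..q21 would hold the coded responses.
# survey = pd.read_csv("rubric_survey.csv")
# print(round(cronbach_alpha(survey[[f"q{i}" for i in range(1, 22)]]), 3))  # expected ~ .788
```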

     

    Table 2

    Prospective Teachers’ Gender Information (Demographical Variables)

    Gender                  n       %
    Male                    50      12.5
    Female                  337     84.3
    Not disclosed           13      3.3
    Total                   400     100.0

     

    Table 2 shows the prospective teachers’ gender (demographic) information. Respondents enrolled in the education departments of four universities in Lahore (Punjab) comprised males (f=50, 12.5%), females (f=337, 84.3%), and those who did not disclose their gender (f=13, 3.3%).

     

    Table 3

    Previous Teaching Experience of Prospective Teachers’ Information (demographical variables)

    Previous Teaching Experience    n       %
    None                            104     26.0
    Home Tutoring                   111     27.8
    Internship Program              144     36.0
    Teaching in School              33      8.3
    Not disclosed                   8       2.0
    Total                           400     100.0

     

    Table 3 shows the previous teaching experience (demographic) information of the prospective teachers enrolled in the education departments of four universities in Lahore (Punjab). There were prospective teachers with no previous teaching experience (f=104, 26%), with home tutoring experience (f=111, 27.8%), with internship program experience (f=144, 36%), and with school teaching experience (f=33, 8.3%), while (f=8, 2%) respondents did not disclose their teaching experience.

     

    Table 4

    Descriptive Statistics of Responses by Prospective Teachers about how often Grading Rubrics were used to Provide Feedback on Grades by their Teachers

     

     

    Item                                                                        n       M       SD
    How often were grading rubrics used to provide feedback on your grades?    398     2.16    .878
    Total                                                                       398

     

     

     

    Table 4 shows descriptive statistics (M=2.16, SD= .878) of responses by prospective teachers about how often grading rubrics were used to provide feedback on grades by their teachers. There were (N=400) total respondents, while two respondents did not respond to this statement.


    Table 5

    ANOVA for Mean Comparison of When They Were Asked about Their Responsiveness to Receiving Grades (Low or High) or Even No Grades during Their Degree Program

    When you received a rubric with no letter grade (for example, A, B, C, D, or F) on it, how receptive were you to the teacher’s feedback?
        Between Groups    SS = 120.932    df = 7      MS = 17.276    F = 33.873    p = .000
        Within Groups     SS = 188.198    df = 369    MS = .510
        Total             SS = 309.130    df = 376

    When you received a high grade (example A), how receptive were you to the teacher’s feedback?
        Between Groups    SS = 72.306     df = 7      MS = 10.329    F = 18.049    p = .000
        Within Groups     SS = 211.175    df = 369    MS = .572
        Total             SS = 283.480    df = 376

    When you received a low grade (for example, D), how receptive were you to the teacher’s feedback?
        Between Groups    SS = 189.468    df = 7      MS = 27.067    F = 42.505    p = .000
        Within Groups     SS = 234.978    df = 369    MS = .637
        Total             SS = 424.446    df = 376

     

     

     

     


    Table 5 shows the results of the one-way ANOVA comparing prospective teachers’ responses when they were asked about their responsiveness to receiving grades (low or high) or even no grades during their degree program. There was a significant mean difference in prospective teachers’ receptiveness to teacher feedback when they received a rubric with no letter grade (for example, A, B, C, D, or F) (F=33.873, df=7, p=.000), when they received a high grade (for example, A) (F=18.049, df=7, p=.000), and when they received a low grade (for example, D) (F=42.505, df=7, p=.000).
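    As a quick arithmetic check (not part of the authors’ reporting), each F value in Table 5 is simply the ratio of the between-group to the within-group mean square; for the first item, for instance:

$$
F = \frac{MS_{\text{between}}}{MS_{\text{within}}}
  = \frac{SS_{\text{between}}/df_{\text{between}}}{SS_{\text{within}}/df_{\text{within}}}
  = \frac{120.932/7}{188.198/369}
  = \frac{17.276}{0.510}
  \approx 33.87
$$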


     

    Table 6

    Descriptive Statistics of Views or Responses of Prospective Teachers When They Were Asked about Their Responsiveness to Receiving Grades (Low or High) or Even No Grades during Their Degree Program

    When you received a rubric with no letter grade (for example, A, B, C, D, or F) on it, how receptive were you to the teacher’s feedback?
        n = 380    Minimum = 1    Maximum = 4    M = 1.70    SD = .905

    When you received a high grade (example A), how receptive were you to the teacher’s feedback?
        n = 377    Minimum = 1    Maximum = 4    M = 1.96    SD = .868

    When you received a low grade (for example, D), how receptive were you to the teacher’s feedback?
        n = 380    Minimum = 1    Maximum = 4    M = 2.17    SD = 1.063

     


    Table 6 shows the responses of prospective teachers about their receptiveness to the grades, high or low, given by their teachers as feedback. Results show that prospective teachers were the least receptive to the teacher’s feedback when they received a low grade (for example, D) (M=2.17, SD=1.063), more receptive when they received a high grade (example A) (M=1.96, SD=.868), and most receptive when they received a rubric with no letter grade (for example, A, B, C, D, or F) on it (M=1.70, SD=.905).


     

    Table 7

    Descriptive Statistics of Views or Responses by Prospective Teachers Regarding Using Rubrics in Their Own Classroom/Future Classroom

    The reason(s) you used/will use grading rubrics in your classroom/ future classroom was/will be: (select that apply)
        n = 390    Minimum = 1    Maximum = 5    M = 2.17    SD = 1.188

    How did/do you intend to design your grading rubrics (i.e., online rubrics, rubrics included in a curriculum, etc.)?
        n = 393    Minimum = 1    Maximum = 3    M = 2.00    SD = .866

    Do you provide/intend to provide grading rubrics to students in advance as a guide to your assessments/future assessments?
        n = 394    Minimum = 1    Maximum = 2    M = 1.13    SD = .348

    Did you feel that your previous experience with grading rubrics at the university affect/may affect your use of rubrics in your classroom/ future classroom?
        n = 395    Minimum = 1    Maximum = 2    M = 1.41    SD = .526

     


    Table 7 shows the responses of prospective teachers regarding using rubrics in their own classroom/future classroom. Results show that the main reason prospective teachers will use rubrics in their future classroom (M=2.17, SD=1.188) was to benefit the students by giving them the rubrics as a guide, while when they were asked how they intend to design grading rubrics, i.e., online rubrics, rubrics included in a curriculum, etc. (M=2.00, SD=.866), they responded that they would use premade rubrics only. When prospective teachers were asked about their intention to provide grading rubrics to students in advance as a guide to their assessments/future assessments (M=1.13, SD=.348), they responded that they would certainly provide grading rubrics to students in advance as a guide. When they were asked about their previous experience with grading rubrics at the university level (M=1.41, SD=.526), they responded that, indeed, previous experience with grading rubrics at the university level would affect their use of rubrics in their future classroom.


     

    Table 8

    ANOVA for Mean Comparison between the Level or Quality of the Learning Material (Content) Taught and the Quality of Practice on Rubrics during their Study Program

    What level do you feel was of the content (theory) on rubrics in your study program?
        Between Groups    SS = 7.784      df = 4      MS = 1.946    F = 3.153    p = .015
        Within Groups     SS = 207.395    df = 336    MS = .617
        Total             SS = 215.179    df = 340

    At what level do you feel the skill (practice) of rubrics in your study program was applied?
        Between Groups    SS = 10.980     df = 4      MS = 2.745    F = 3.451    p = .009
        Within Groups     SS = 268.026    df = 337    MS = .795
        Total             SS = 279.006    df = 341

     

     

     

     


    Table 8 shows the results of the one-way ANOVA for the mean comparison of the level or quality of the learning material (content) taught and the quality of practice on rubrics during the study program. Results show that there was a significant mean difference between the responses of prospective teachers regarding the level they felt the content (theory) on rubrics in their study program was at (F=3.153, df=4, p=.015). Results also show a significant mean difference between the responses of prospective teachers regarding the level at which they felt the skill (practice) of rubrics in their study program was applied (F=3.451, df=4, p=.009).


     

    Table 9

    Descriptive Statistics about the Views or Responses on the Level or Quality of the Learning Material (Content) Taught and the Quality of Practice on Rubrics during their Study Program

    What level do you feel was of the content (theory) on rubrics in your study program?
        n = 379    Minimum = 1    Maximum = 4    M = 2.34    SD = .807

    At what level do you feel the skill (practice) of rubrics in your study program was applied?
        n = 380    Minimum = 1    Maximum = 4    M = 2.43    SD = .920

     


    Table 9 shows descriptive statistics of the views or responses of prospective teachers about the level or quality of the learning material (content) taught and the quality of practice on rubrics during the study program. Results show that when prospective teachers were asked what level they felt the content (theory) in their study program was at (M=2.34, SD=.807), they rated the quality of the learning material taught on rubrics as ‘satisfactory’, and when they were asked what level they felt the skill (practice) of rubrics in their study program was at (M=2.43, SD=.920), they likewise rated the quality of practice on rubrics as ‘satisfactory’.

    Conclusion

    The purpose of this paper was to investigate the use of grading rubrics in education and their influence from the viewpoint of prospective teachers. The study reviewed the literature on the effectiveness of rubrics as well as research on the use of rubrics in education in Pakistan, and a survey was conducted with 400 prospective teachers from four universities in Lahore (Punjab). The findings indicate that the prospective teachers were very unreceptive to receiving low grades on their learning performance and assessments during their study program; on the contrary, when they were awarded high grades, they demonstrated more interest and focus in further learning, which further supported their intention to use rubrics in their future classrooms. However, they showed reluctance to develop their own grading rubrics to benefit their future students and were more inclined towards using premade rubrics only. This strengthens the argument that prospective teachers lack knowledge of and experience with rubrics in their study program(s). They were also unaware of the required or suitable level of content and practice on rubrics, as they had not benefited from good-quality content and hands-on practice with rubrics. This work contributes to the existing body of research on the use of rubrics in education by giving a detailed and critical evaluation of the research that has already been conducted and by presenting fresh empirical data from the viewpoint of prospective teachers, with the aim of stimulating further research and practice on grading rubrics in the field of education.

References

  • Akram, S., Butt, S., & Muhammad, Y. (2022). Elementary school teachers’ perspectives on the science textbook. Global Educational Studies Review, 7(3), 24–33.
  • Ali, I., Shah, S. M. H., & Gujjar, A. A. (2014). Continuous classroom assessment at the primary level. Journal of Educational Technology, 11(1), 15-21.
  • Alvarez-Bell, R. M., Wirtz, D., & Bian, H. (2017). Identifying Keys to Success in Innovative Teaching: Student Engagement and Instructional Practices as Predictors of Student Learning in a Course Using a Team-Based Learning Approach. Teaching & Learning Inquiry: The ISSOTL Journal, 5(2).
  • Andrade, H. (2005). Teaching With Rubrics: The Good, the Bad, and the Ugly. College Teaching, 53(1), 27–31.
  • Anwar, S., Muhammad, Y., & Bokhari, T. B. (2022). Teachers’ intentions & challenges concerning e-assessment at the virtual university of Pakistan: A phenomenological study. Research Journal of Social Sciences & Economics Review, 3(2), 92-101.
  • Batterton, K. A., & Hale, K. N. (2017). The Likert Scale: What It Is and How To Use It. Phalanx, 50(2), 32–39.
  • Bocala, C., & Boudett, K. P. (2015). Teaching Educators Habits of Mind for Using Data Wisely. Teachers College Record, 117(4), 1– 20.
  • Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368.
  • Brooks, G. (2013). Assessment and Academic Writing: A Look at the Use of Rubrics in the Second Language Writing Classroom. Kwansei Gakuin University Humanities Review, 17, 227–240.
  • Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry & research design: Choosing among five approaches (4th ed.). Sage.
  • Earl, L., & Timperley, H. (2008). Professional Learning Conversations: Challenges in Using Evidence for Improvement. In Springer eBooks.
  • Farooq, M., Ahmed, K., & Farooq, S. (2020). Introducing self-assessment for evaluating learners in Pakistan. Global Social Sciences Review, V, 120-136.
  • Gul, R. B., Tharani, A., Lakhani, A., Rizvi, N. F., & Ali, S. K. (2016). Teachers’ perceptions and practices of written feedback in higher education. World Journal of Education, 6(3), 10–20.
  • Hejji Alanazi, M. (2019). A study of the preservice trainee teachers’ problems in designing lesson plans. Arab World English Journal, 10(1), 166–182.
  • Imran, A., Muhammad, Y., & Waqar, Y. (2022). Prospective teachers’ conceptions of reflective teaching: A qualitative study. Research Journal of Social Sciences & Economics Review, 3(3), 75-82.
  • Jeong, H. (2015). Rubrics in the classroom: do teachers really follow them? Language Testing in Asia, 5(1).

Cite this article

    Mir, Tayyaba Tariq, and Amna Arif. "Prospective Teachers' Perceptions and Experiences with Grading Rubrics: A Survey Study." Global Social Sciences Review VIII, no. II (2023): 480-491. https://doi.org/10.31703/gssr.2023(VIII-II).44