Abstract
Language is a product of four skills: reading, writing, listening, and speaking. At the university level in Pakistan, English language learning and examination have been reduced to reading and writing. However, English Speaking Skills (ESS) are in high demand in professional life, and they require testing and grading just as English writing skills do. This study is based on developing ESS through testing criteria. A survey of university freshmen was used to collect data. Using Kim's (2010) testing scales, the freshmen's ESS progress was gauged through their speaking performances. As a case study, this research used a longitudinal approach (two academic semesters) with mixed methods. Interviews with University English Language Teachers (UELTs) and University Management and Administration (UM&A) were analyzed textually. The criterion, used as a yardstick, helped the learners work toward optimal performance.
Key Words
Testing Scales, Developing Oral Skills, Recorded Speaking Performances, Pakistan
Introduction
As a widely used international language and the official language of Pakistan, English enjoys a high status (Haidar, 2018; Jafri, Zai, Arain, & Soomro, 2013; Rassool, 2013). Its global growth (Ntshuntshe, 2011) has transformed English into the language of international capitalism (Pennycook, 1995). In Pakistan, commercial and official transactions, planning, coordination, even letter-writing and day-to-day written communication are often carried out in English (Haidar, 2018; Ntshuntshe, 2011). The use of English is no longer limited to written communication, as it largely was in colonial times; the need for spoken communication is also soaring. Spoken communication has a coordinating role in the learning process (Hall, 1993; Wilkinson, 1970). Asking a question or raising another option might position the University Freshman (UF) to augment a point in a classroom environment. Thus, developing English Speaking Skills (ESS), and confirming that development against a criterion rather than assuming the learners' communicative competence, is the need of the day.
In instruction, however, the use of English is largely confined to reading and writing in Pakistan (Jabeen, 2013; Zulfiqar, 2011). In the classroom, speaking is usually done either in the national language (Urdu) or in a regional language (García, 2011). In large classrooms, teachers use English for instruction while students listen without asking questions or demanding explanations (Ntshuntshe, 2011). The learners keep receiving information without any hands-on practice in speaking; their exposure to English is limited to listening to their teachers. The University English Language Teachers (UELTs) assume that the students understand what they say.
Hence, there is a need to make ESS as compulsory as English writing skills. The UF could enhance their speaking performance if it were tested against a criterion and graded like written performance. Taking into account the most common school background of the UF (Kanwal, 2016; Zulfiqar, 2011), it became vital to teach them oral skills consciously. The present paper is based on the first author's dissertation. This case study emphasized an individual approach, backed by the department and the university, to enhancing oral skills through standardized testing.
Speaking Skill and Testing
In the natural acquisition of language, a child develops oral skills long before he/she starts learning reading and writing. Natural language acquisition takes place through understanding messages without understanding each word and structure in them (Krashen & Terrell, 1995). There are people in some communities around the world, including in Pakistan, who never make the transition to reading and writing (unless necessary). They are satisfied to live in a speaking culture (Flowerdew & Miller, 2005). There are tribal languages in South America, Africa, and Asia that still have no writing system. Oracy takes precedence because immediate communication takes place through oral channels (Wilkinson, 1970).
In fact, the process of learning and using the English language requires conscious effort (Schmidt, 1995) on the part of teachers and learners. It should be mandatory for pupils to learn ESS for a competent professional bearing (Rahman, 2005). ESS are a source of power, and learners can realize this through formal assessment (Shohamy, 2014). Teaching speaking skills is one step, and learning them in varied contexts is another; however, developing and sustaining this skill for real-life application is the most important step, and it can be pursued through testing. Teaching ESS without testing them, and without awarding grades to the students, is like denying their importance and academic standing. Some teachers find it conducive to grade class discussions and participation to incite the learners to engage in purposeful interaction (Wesley, 2013). Hence, the education system in Pakistan needs to incorporate ESS into English language learning to make the UF linguistically functional.
In fact, teaching is rounded off by testing, since it is good for teachers and learners to know where they stand (Laar, 1998). Testing is interlinked with the improvement (Kanwal, 2016) of the UF's ESS. The UF's ESS lead them to success in acquiring modern knowledge, professions, and higher positions (Canagarajah & Ashraf, 2013; Cheng, 2008; Hassan, 2009). Thus, systematic testing of ESS may help the learning community as well as the teaching community to observe and plan the constructs of speaking performance and to focus their attention on improvement. Usually, the ESS of the UF are neither tested systematically nor graded in Pakistan.
For testing ESS, the scorer's cognitive processes (Bejar, 2012) need to be consistent with the constructs for measurement. Scientific scaling of oral ability is difficult (Cheng, 2008; Hughes, 2001). Language teachers must know the purpose of testing language ability and assess it systematically so that the learners can also conscientiously try to improve in specific areas of measurement. According to Educational Testing Service (ETS) ratings, language learners showcase a higher level of proficiency in some aspects of performance than in others. Assessment tasks need to be long enough to measure the speaking ability of the assessed. Language teachers are advised to create speaking tasks and tests corresponding directly to class activities, and to provide language learners with contextualized tasks organized around a single theme. Such activities contribute to the learners' training to accomplish a communicative purpose in real life (Sweet et al., 2000). This study explores the application of a testing criterion and its effects on the UF's ESS at a Pakistani university.
Research Methodology
For this study, a mixed methods research paradigm was used. To achieve the said goal, the researchers applied a classroom research design, incorporating human perceptions (interviews with the UM&A and the UELTs), in the form of a case study. This case study highlighted the change in the English speaking skills of the UF, captured through audio recordings over semesters 1 and 2. The quantitative tools included a survey, a scoring rubric, the UF's speaking performances, and the comparative evaluation of their speaking ability in two consecutive semesters. Through the qualitative method (interviews), the first author engaged the university facilitators, the English language teachers, and the educational administrators and managers to coordinate in this research study.
To establish the UF's background in English language proficiency, a survey was conducted. The survey was emailed to the students, and the UF emailed it back to the first author as part of their lesson (Dornyei, 2007) from the language lab of the university. Administering the survey during class time made the students take it intently. The survey enabled the first author, as a researcher, to understand the students' situation in language learning; it aimed at describing certain characteristics of the sample for this study.
As a UELT, the first author shared Kim's (2010) scoring rubric, containing the five scales of meaningfulness, grammatical competence, discourse competence, task completion, and intelligibility, with the UF (see Appendix). Each of these scales has further levels with concrete descriptors. The UF's speaking performances were graded on these scales. The collected performances, 292 from the first semester and 562 from the second, were graded according to Kim's rating scales. Percentages of all the performances under the five main categories, with their six-point scale variations (5 for 'excellent control', 4 for 'good', 3 for 'adequate', 2 for 'fair', 1 for 'limited', and 0 for 'no control'), were compared across the two semesters to find the difference.
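To make this tallying step concrete, the minimal Python sketch below counts how many performances received each level on one rubric scale and converts the counts into percentages for a semester-wise comparison. It is only an illustration of the arithmetic described above (the study itself used Microsoft Excel); the function name and the sample ratings are hypothetical, not the study's data.

```python
from collections import Counter

# Labels for the six-point scale used with Kim's (2010) rubric:
# 0 for 'no control' up to 5 for 'excellent control'.
LEVELS = {5: "excellent", 4: "good", 3: "adequate", 2: "fair", 1: "limited", 0: "no control"}

def level_percentages(ratings):
    """Percentage of performances awarded each level on one rubric scale."""
    counts = Counter(ratings)
    total = len(ratings)
    return {LEVELS[lvl]: round(100 * counts.get(lvl, 0) / total, 2)
            for lvl in sorted(LEVELS, reverse=True)}

# Hypothetical ratings on the 'meaningfulness' scale for each semester.
sem1 = [3, 4, 4, 5, 2, 4, 3, 1]
sem2 = [4, 3, 5, 4, 3, 5, 4, 4]

for name, ratings in (("Sem-1", sem1), ("Sem-2", sem2)):
    print(name, level_percentages(ratings))
```

Running the same tally over each of the five scales yields semester-wise percentage profiles of the kind compared in Table 5.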
Giving every learner an opportunity to speak in English in a large class was unachievable. The first author addressed this problem by having the UF record their speaking performances; receiving their responses as audio clips was a way around the constraints and was manageable within the available resources. With guided motivation from the facilitator, the learners submitted recorded audio clips of varying duration, from 1 to 6 minutes, for a variety of task performances. In the first semester, the UF were asked to record one-minute short dialogues. The first author deemed it important to retain the commitment of the first-semester students; thus, the tasks' duration was reduced to suit their needs. A short dialogue was approved to boost their confidence, as they felt more comfortable with a shorter speaking performance. Recording performances with a friend or in privacy was more convenient for the introverted students, who did not prefer to share their thoughts, points of view, and ideas in a class of 40 students. For them, it was more like talking over the phone and communicating what one finds hard to say in a face-to-face conversation.
The first author, as a UELT, motivated the learners by explaining that recordings could enable them to develop more confidence, gain clearer concepts, debate, argue, negotiate, persuade, and strengthen their literacy via oracy. Most of the students felt more self-reliant as they planned their recordings by writing scripts beforehand. The time spent listening, thinking, critiquing, writing the script, and then recording gave them reflective time to self-correct in the process. Some of the introverted students, who participated reluctantly in class activities, planned and recorded their tasks more regularly than they participated in class.
The first author interviewed the UELTs to develop an insight into the learners' linguistic cognition as they joined the university, to understand their ways of teaching ESS and the value they gave to learners' ESS, and to uncover their unsystematic language testing criteria and techniques. Teachers have first-hand knowledge of students (Sayer, 2015), and interviewing is a 'versatile research instrument' (Dornyei, 2007). Through interviews, the first author also attained a panoramic overview of the top management's and administration's perspectives on the research issue.
Study Participants
The participants of this study were 120 freshmen from the Mechatronics Engineering Department, 9 English language teachers from the Department of Humanities, and 11 people from the university administration and management, including the Vice Chancellor, the Senior Dean, the deans, directors, and Heads of Departments. As a researcher, the first author was herself an active participant: facilitating the freshmen, working with teaching, administrative, and managerial colleagues, discussing the relevant problems with them, and finding the closest possible solutions.
Data Analysis
We analyzed the quantitative data collected through the survey and the UF's speaking performances using Microsoft Excel to calculate frequencies and percentages. The qualitative data collected through interviews with the language instructors and the university administration were analyzed through textual analysis. The findings of the study are discussed below.
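As an alternative to spreadsheet formulas, the frequency-to-percentage conversion applied to the survey items can be sketched in a few lines of Python. The answer labels and counts below are invented for illustration only and merely approximate the shape of the figures reported in Table 1.

```python
from collections import Counter

def survey_percentages(responses):
    """Convert raw categorical survey answers into percentages of respondents."""
    freq = Counter(responses)
    total = len(responses)
    return {answer: round(100 * count / total, 2) for answer, count in freq.items()}

# Invented answers to one survey item, e.g. "Do you talk to friends in English?"
answers = ["occasionally"] * 61 + ["yes"] * 5 + ["no"] * 54
print(survey_percentages(answers))
# -> {'occasionally': 50.83, 'yes': 4.17, 'no': 45.0}
```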
Results and Findings
Use of English at Personal Level
Intake reports of the Bachelor of Engineering in Mechatronics (BEMTS) program at Air University for 2013 helped the first author realize that more than 89% of the students enrolled in the program were from government colleges in different corners of the country, and less than 11% were from 'O' and 'A' Level education. Then, through a survey, the first author gauged their background in ESS, which made her aware of their practices of ESS at the personal level, as shown in Table 1.
Table 1. Frequency of College English Language Learners' (CELLs) Practical Use of English Language at Personal Level

| S. No | CELLs' talk practices | Affirmative % | Occasional % |
|-------|-----------------------|---------------|--------------|
| 1 | Talk to friends | 4.16 | 50.83 |
| 2 | Speak in family get-togethers | 3.33 | 53.33 |
| 3 | Talk to parents | 5.00 | 22.50 |
| 4 | Parents talk to learners | 5.00 | 21.66 |
Comparing the CELLs' affirmative and occasional practices of talking in English at college level in Table 1 made the first author realize that the learners' merely occasional talk in English at an informal level demanded intentional practice of ESS: only 3-5% of the language learners practiced ESS at a personal, informal level. However, the expectations of parents (more than 41%) from the language users were very high. Sometimes parents use a local language at home but feel that another major language should also be spoken (Cook, 2016). The survey revealed that, in spite of more than 40% of the students' personal liking for English, its limited utility for the CELLs at the personal level was demotivating for acquiring it.
Teaching, Using and Testing of English for Academic Purposes
The practical use of the English language at the academic level was higher than at the personal level. Table 2 demonstrates that the teaching and testing practices of English oral skills/ESS did not match their academic utility; for the UF's academic benefit, greater attention was required to enhance ESS. At college level, 65% of language teachers used the lecture method to teach language in English classes, and less than 19% of students were taught oral skills, yet more than 65% presented their projects in English at university. Sometimes, 35% of language teachers tried to teach oral skills. Usually, the evaluation criterion for oral skills was either not used or students were unaware of its use, and more than 60% of the college English language learners' (CELLs) ESS were not tested.
Table 2. Language Learners' Perspective on Teaching, Using and Testing of English Oral Skills at College Level

| S. No | Using English oral skills at academic level | % of students | Teaching of oral skills | % of language teachers | Testing criteria of oral skills | % of students | Testing of English oral skills | % tested |
|---|---|---|---|---|---|---|---|---|
| 1 | Present projects | 65.83 | Taught oral skills | 18.33 | Told | 27.50 | Tested | 10.00 |
| 2 | Sometimes present projects | 16.66 | Sometimes taught oral skills | 35.00 | Uncertain | 16.66 | Sometimes tested | 29.16 |
| 3 | Do not use English to present projects | 15.83 | No teaching | 46.66 | Not told | 55.83 | No testing | 60.83 |
| 4 | Silent | 1.68 | Silent | 0.01 | Silent | 0.01 | Silent | 0.01 |
Table 3 then helped us understand that, had more of the CELLs been aware of the testing criteria, a better percentage of them could have tried to achieve them.
Table 3. College Language Learners' Awareness of the Testing Criteria of English Oral Skills in 2013

| % aware of the criterion of testing oral skills | % uncertain | % no testing criteria | % silent | % tried to achieve criterion |
|---|---|---|---|---|
| 27.50 | 16.66 | 55.83 | 0.01 | 26.66 |
This gap in the teaching and testing of ESS at college level led the first author to involve the UELTs and the UM&A in this research.
Interviews of the University English Teachers and Management
UELTs are directly involved in the teaching, testing, and learning activities of a university. Thus, the first author interviewed the UELTs to receive firsthand knowledge about their pedagogical practices. The majority of the UELTs reported encountering below-average language learners at joining time, and the UM&A had similar observations. After realizing the gaps in the CELLs' language learning processes, the first author gauged the UELTs' teaching practices. Table 4 shows that five out of nine (5/9) UELTs most commonly assessed the UF's end-of-semester presentations.
Table 4. University English Language Teachers' Conscientious Teaching Practice of ESS and the Testing Constructs of Their Criteria at University Level, in 2013

| S. No | UELT practices | No. of UELTs |
|---|---|---|
| 1 | Conscientious teaching practice of ESS | 9 |
| 2 | Presentations most generally assessed | 5 |
| 3 | UELTs using individual testing criteria | 9 |
| 4 | Variety of constructs used to test ESS: | |
| 5 | Tone, voice, pronunciation, body language, facial expression, other things, rest of things, everything | 2 |
| 6 | Relevance/accuracy errors/correct English | 3 |
| 7 | Vocabulary | 2 |
| 8 | Introduction | 1 |
| 9 | Intonation, use of phrases, inviting silent students in group discussion, bouncing back a question, and active participation | 1 |
| 10 | Fluency | 2 |
| 11 | Confidence | 2 |
All nine UELTs used testing criteria to gauge ESS, and the UF were generally assessed on a variety of testing constructs; almost everything. Specifying test constructs in a criterion helps the users determine gaps in utterances, whereas including everything in the assessment procedure is beyond possibility. Including only tone, fluency, and body language while leaving out other constructs was insufficient; retaining vocabulary, pronunciation, and facial expression for evaluative procedures while leaving the rest to the assessor's imagination was not justified either. Thus, the UELTs were required to balance the testing constructs scientifically for assessing the ESS of the UF.
To bridge this gap, the first author used Kim's criteria for assessing the students' ESS, while building a shared portfolio with the learners through recordings of their speech. The speaking performances, in the form of recorded audio, enabled the teacher/rater to evaluate the speaking ability of the UF methodically.
Change in UF Speaking Performances
The UF in a mixed-ability class had different levels of competence, but stating that a class (of 40 students) had no grammatical competence was a generalization that could be avoided through a rubric, by scientifically weighing the number of learners across the 0-5 responses ('no' to 'limited', 'fair', 'adequate', 'good', and 'excellent' grammatical competence). Observing a criterion, the UELT (the participating researcher) and the UF gradually became aware of the test constructs. The teacher designed the UF's learning experiences, and by doing so, she improved her own teaching practices. The UF started becoming mindful of the meaningfulness of their own speaking performances and began realizing the differences between the scales of excellent, good, adequate, fair, limited, and no competence. The analytical scoring rubric trained them to distinguish major errors from minor errors, and to recognize the distinction between a wide, a relatively wide, and a somewhat narrow range of syntactic structures.
Knowing that their performances were being evaluated, the UF started self-correcting. They tried to repair and fix their talk without the teacher's intervention, and this repair in their conversation led them to the self-monitoring that took them a step ahead on the road to language learning.
Table 5. Semester-1 (Sem-1) and Semester-2 (Sem-2) Scale-Point Adequate Responses (AR), Good Responses (GR), and Excellent Responses (ER) in Percentages

| S. No | Test constructs | AR % Sem-1 | AR % Sem-2 | GR % Sem-1 | GR % Sem-2 | ER % Sem-1 | ER % Sem-2 |
|---|---|---|---|---|---|---|---|
| 1 | Meaningfulness | more than 23 | more than 28 | more than 58 | more than 48 | more than 10 | more than 10 |
| 2 | Grammatical Competence | more than 32 | more than 34 | more than 45 | more than 37 | more than 5 | more than 5 |
| 3 | Discourse Competence | more than 34 | more than 32 | more than 44 | more than 39 | more than 7 | more than 12 |
| 4 | Task Completion | more than 25 | more than 34 | more than 51 | more than 42 | more than 8 | more than 5 |
| 5 | Intelligibility | more than 24 | more than 27 | more than 55 | more than 40 | more than 10 | more than 17 |
Table 5 lays out that Semester-2 performances were more satisfactory in meaningfulness, grammatical competence, task completion, and intelligibility. In the second semester, the speaking performances carried some elaboration (Adequate (2), Meaningfulness) and somewhat uncomplicated ideas (Adequate (3) extension). The comparative evaluation of the speaking performances of the two semesters showed that the 'use of somewhat simple or inaccurate lexical form' (Adequate (4), Grammatical Competence) was reduced in the second semester, and more second-semester performances were free of such errors. The second-semester performances displayed simple linguistic structures (Adequate (3) extension). The performances of the two semesters were close on Adequate (1) extension of grammatical competence, because the listeners could understand what the speakers wanted to say, with few considerable errors that could create ambiguous meaning. In the first semester, more UF submitted 'occasionally incoherent' (Adequate (1) extension) responses, whereas in the second semester, some UF used 'simple cohesive devices' (Adequate (4), Discourse Competence). The UF in Semester-1 submitted responses that showed somewhat loosely connected ideas (Adequate (3) extension). In the second semester, the UF sufficiently accomplished their speaking tasks (Adequate (1), Task Completion) and grew sufficiently intelligible.
Table 5 also showed that Sem-1 and Sem-2 were close in excellence; both semesters paralleled each other on the scales of meaningfulness and grammatical competence. The responses of the UF in the second semester had a wide range of grammatical structures and lexical forms (see Appendix, Excellent (2) extension) and displayed advanced syntactic structures (Excellent (3) extension). The analysis of the scale 'Excellent' in the testing rubric demonstrated that, despite advanced materials, specific terminology, and complex grammatical structures, 17% (Sem-2) versus 10% (Sem-1) qualified at this level of the scale. The comparative evaluation of the speaking performances showed that more than 5% of the participants gained control of excellent-scale discourse competence. The UF's responses were logically structured (see Appendix, Excellent (2), Discourse Competence), with logical openings and closures and logical development of ideas. The comparative evaluation of the speaking performances on the three extensions of excellence in intelligibility found more than 7% of speakers with excellent control, and more than 9% of the UF improved on the test construct Intelligibility, Excellent (3). This improvement showed a methodically brought-out potential in the UF.
To conclude, the UF advanced adequately in ESS in the second semester as far as meaningfulness, grammatical competence, task completion, and intelligibility were concerned.
Discussion and Conclusion
Several studies (e.g., Coleman, 2010; Mansoor, 2003, 2005; Rahman, 2002, 2005; Shamim, 2008; Tamim, 2014) have found that most Pakistani school graduates lack English language fluency, especially speaking skills, when entering university. This study found that at the academic level the UF are bound to use English; academic pressure is one of the most powerful incentives that UELTs and UM&A can exert on the learners, and vice versa. Paradoxically, the speaking skills of the UF are neither tested nor graded like their writing skills, and teachers usually assess students' speaking skills without a formal criterion. Thus, there is a dire need to assess students' speaking skills through a criterion.
Therefore, in this research the first author introduced a criterion for measuring the speaking skills of students, using techniques such as having students record their utterances. This study developed a mechanism to measure the speaking performances of students, which brought a considerable positive change in their speaking skills. The study hence contributes, if applied properly, to solving a crucial problem: the English-speaking ability of university students.
APPENDIX:
Kim’s (2010) Analytic Scoring Rubric
Analytic Scoring Rubric

| Scale | Description |
|---|---|
| Meaningfulness (Communication Effectiveness) | Is the response meaningful and effectively communicated? |
| Grammatical Competence | Accuracy, complexity and range |
| Discourse Competence | Organization and cohesion |
| Task Completion | To what extent does the speaker complete the task? |
| Intelligibility | Pronunciation and prosodic features (intonation, rhythm, and pacing) |
Meaningfulness (Communication Effectiveness): Is the response meaningful and effectively communicated?

| S | 5 Excellent | 4 Good | 3 Adequate | 2 Fair | 1 Limited | 0 No |
|---|---|---|---|---|---|---|
|  | The response: | The response: | The response: | The response: | The response: | The response: |
| 1 | is completely meaningful: what the speaker wants to convey is completely clear and easy to understand. | is generally meaningful: in general, what the speaker wants to convey is clear and easy to understand. | occasionally displays obscure points; however, main points are still conveyed. | often displays obscure points, leaving the listener confused. | is generally unclear and extremely hard to understand. | is incomprehensible. |
| 2 | is fully elaborated. | is well elaborated. | includes some elaboration. | includes little elaboration. | is not well elaborated. | contains not enough evidence to evaluate. |
| 3 | delivers sophisticated ideas. | delivers generally sophisticated ideas. | delivers somewhat simple ideas. | delivers simple ideas. | delivers extremely simple, limited ideas. |  |

*(The researcher has replaced the bulleted descriptions of the six-point scales (0 for 'no control' to 5 for 'excellent control') with numbers (1, 2, and 3) for better understanding of the criteria.)
Grammatical Competence: Accuracy, Complexity and Range

| S | 5 Excellent | 4 Good | 3 Adequate | 2 Fair | 1 Limited | 0 No |
|---|---|---|---|---|---|---|
|  | The response: | The response: | The response: | The response: | The response: | The response: |
| 1 | is grammatically accurate. | is generally grammatically accurate, without any major errors (e.g., article usage, subject/verb agreement, etc.) that obscure meaning. | rarely displays major errors that obscure meaning, and a few minor errors (but what the speaker wants to say can be understood). | displays several major errors as well as frequent minor errors, causing confusion sometimes. | is almost always grammatically inaccurate, which causes difficulty in understanding what the speaker wants to say. | displays no grammatical control. |
| 2 | displays a wide range of syntactic structures and lexical form. | displays a relatively wide range of syntactic structures and lexical form. | displays a somewhat narrow range of syntactic structures; too many simple sentences. | displays a narrow range of syntactic structures, limited to simple sentences. | displays lack of basic sentence structure knowledge. | displays severely limited or no range and sophistication of grammatical structure and lexical form. |
| 3 | displays complex syntactic structures (relative clause, embedded clause, passive voice, etc.) and lexical form. | displays relatively complex syntactic structures and lexical form. | displays somewhat simple syntactic structures. | displays use of simple and inaccurate lexical form. | displays generally basic lexical form. | contains not enough evidence to evaluate. |
| 4 |  |  | displays use of somewhat simple or inaccurate lexical form. |  |  |  |
Discourse Competence: Organization and Coherence

| S | 5 Excellent | 4 Good | 3 Adequate | 2 Fair | 1 Limited | 0 No |
|---|---|---|---|---|---|---|
|  | The response: | The response: | The response: | The response: | The response: | The response: |
| 1 | is completely coherent. | is generally coherent. | is occasionally incoherent. | is loosely organized, resulting in generally disjointed discourse. | is generally incoherent. | is incoherent. |
| 2 | is logically structured: logical openings and closures; logical development of ideas. | displays generally logical structure. | contains parts that display somewhat illogical or unclear organization; however, as a whole, it is in general logically structured. | often displays illogical or unclear organization, causing some confusion. | displays illogical or unclear organization, causing great confusion. | displays virtually non-existent organization. |
| 3 | displays smooth connection and transition of ideas by means of various cohesive devices (logical connectors, a controlling theme, repetition of key words, etc.). | displays good use of cohesive devices that generally connect ideas smoothly. | at times displays somewhat loose connection of ideas. | displays repetitive use of simple cohesive devices; use of cohesive devices is not always effective. | displays attempts to use cohesive devices, but they are either quite mechanical or inaccurate, leaving the listener confused. | contains not enough evidence to evaluate. |
| 4 |  |  | displays use of simple cohesive devices. |  |  |  |
Task Completion: To what extent does the speaker complete the task?

| S | 5 Excellent | 4 Good | 3 Adequate | 2 Fair | 1 Limited | 0 No |
|---|---|---|---|---|---|---|
|  | The response: | The response: | The response: | The response: | The response: | The response: |
| 1 | fully addresses the task. | addresses the task well. | adequately addresses the task. | insufficiently addresses the task. | barely addresses the task. | shows no understanding of the prompt. |
| 2 | displays completely accurate understanding of the prompt, without any misunderstood points. | includes no noticeably misunderstood points. | includes minor misunderstandings that do not interfere with task fulfillment. | displays some major incomprehension/misunderstanding(s) that interferes with task completion. OR | displays major incomprehension/misunderstanding(s) that interferes with addressing the task. | contains not enough evidence to evaluate. |
| 3 | completely covers all main points with complete details discussed in the prompt. | completely covers all main points with a good amount of details discussed in the prompt. | touches upon all main points, but leaves out details. OR | touches upon bits and pieces of the prompts. |  |  |
| 4 |  |  | completely covers one (or two) main points with details, but leaves the rest out. |  |  |  |
Intelligibility: Pronunciation and prosodic features (intonation, rhythm, and pacing)

| S | 5 Excellent | 4 Good | 3 Adequate | 2 Fair | 1 Limited | 0 No |
|---|---|---|---|---|---|---|
|  | The response: | The response: | The response: | The response: | The response: | The response: |
| 1 | is completely intelligible, although an accent may be there. | may include minor difficulties with pronunciation or intonation, but is generally intelligible. | may lack intelligibility in places, impeding communication. | often lacks intelligibility, impeding communication. | generally lacks intelligibility. | completely lacks intelligibility. |
| 2 | is almost always clear, fluid and sustained. | is generally clear, fluid and sustained; pace may vary at times. | exhibits some difficulties with pronunciation, intonation or pacing. | frequently exhibits problems with pronunciation, intonation or pacing. | is generally unclear, choppy, fragmented or telegraphic. | contains not enough evidence to evaluate. |
| 3 | does not require listener effort. | does not require much listener effort. | exhibits some fluidity. | may not be sustained at a consistent level throughout. | contains frequent pauses and hesitations. |  |
| 4 |  |  | may require some listener effort at times. | may require significant listener effort at times. | contains consistent pronunciation and intonation problems. |  |
| 5 |  |  |  |  | requires considerable listener effort. |  |
References
- Coleman, H. (2010). Teaching and learning in Pakistan: The role of language in education. Leeds: The British Council.
- Cook, V. (2016). Second language learning and language teaching (4th ed.). London: Hodder Education.
- Flowerdew, J., & Miller, L. (2005). Second language listening: Theory and practice. New York, NY: Cambridge University Press.
- Haidar, S. (2017). Access to English in Pakistan: Inculcating prestige and leadership through instruction in elite schools. International Journal of Bilingual Education and Bilingualism, 1-16.
- Haidar, S. (2018). The role of English in developing countries: English is a passport to privilege and needed for survival in Pakistan. English Today, 1-7.
- Hassan, R. (2009). Teaching writing to second language learners. Bloomington, IN: iUniverse.
- Jafri, I. H., Zai, S. Y., Arain, A. A., & Soomro, K. A. (2013). English background as the predictors for students' speaking skills in Pakistan. Journal of Education and Practice, 4(20), 30-37.