
Journal of Linguistics and Language Teaching

Volume 13 (2022) Issue 2


Supporting Japanese Students' Confidence and English Fluency through Formative Assessment


James Herbach & Kinsella Valies (Shizuoka, Japan)


Abstract

This classroom-based project included the development of clear and concise instructions, an analytic rubric as well as practice and testing materials to assess students’ speaking abilities. The speaking tasks consisted of three parts: reading aloud, describing a picture, and giving an opinion. As an extension of an ongoing departmental project originally focusing on second-year pharmacy students, the researchers have incorporated eleven first-year student groups from all departments.  Six of the groups were offered the module face-to-face, while the other five received instruction, practice tasks, and feedback online.

The aim of the current project was to improve confidence and production of logical arguments by implementing a formative speaking assessment module that provides students with opportunities for advancement through succinct instruction, insight into strengths and weaknesses, self-evaluation and instructor feedback.  Research questions included: 1) To what extent did the use of the assessment tool improve first-year students’ comprehensibility in English? 2) To what extent did classroom instruction help improve student confidence?

This study took a mixed-method approach in which data from student grades and surveys were collected and analysed. To obtain a baseline of students' speaking abilities prior to any formal instruction and to gain insight into the assessment process, each instructor chose one class and asked the students to record themselves describing a picture. A comparison of the baseline assessment and the finals showed improvement following the introduction of rubrics and explicit instruction. The remaining two aspects also showed a distinct rise in both grades and comprehensibility between the midterms and finals. Survey results showed that, across departments, students' confidence in their ability to express themselves in English increased over the semester. Future research will focus on redesigning and evaluating the speaking assessment rubrics for both first and second years.

Keywords: Formative assessment, speaking assessment, student confidence, English fluency, classroom-based project



1   Introduction 

This research project emerged from previous studies involving formative speaking assessments of second-year university students. The aim of the current project was to improve students' confidence and production of logical arguments by implementing these assessments to provide first-year students with opportunities for speaking advancement and feedback. One of the motivations for this project was to continue to support students' speaking skills in their English language courses. Equally important was the task of creating clear and concise materials which any instructor in any department could easily adapt and implement in their classes, in support of whichever textbooks were being used. In response to this mandate, the decision was made to create a tri-sectional, multi-modal formative assessment which could be used to grade students' speaking abilities. In addition, the project was designed to accommodate first-year students from all five of the departments at the university: the Department of Pharmacy, the Department of International Relations, the Department of Information and Management, the Department of Food and Nutritional Sciences, and the Department of Nursing.


In reference to tri-sectionality, Xiao & Yang (2019) have stated that English language teachers should give students appropriate guidance, which includes instructing them on learning strategies, activating their metacognition, and enhancing their self-efficacy. In addition, the assessment, including all the supplemental teaching materials, was multi-modal in design, as it was created to be implemented either in a face-to-face classroom setting or on an online instructional platform. In support of this endeavour, all materials were designed and standardized to be used as A4-sized printed handouts or as Word documents, PDF documents, and video files which could be shared with students electronically. At the same time, the assessment was produced to be formative, in an endeavour to help students practise and improve their learning outcomes throughout the semester. Students were assessed regularly using speaking tasks, with two main graded assessments. In this way, both instructors and learners could identify points for improvement in a timely manner. To promote this, clear and concise instructions were created and distributed to students multi-modally to establish a logical step-by-step process which students could use in both practising and completing their speaking assessments while broadening their learning strategies. Moreover, instructing students on the use of logical arguments and providing opportunities for them to share their opinions, supported with empirical evidence, was key in activating their metacognition while encouraging their self-efficacy.


This project is ongoing and first began during the fall semester of 2019 with a group of second-year pharmacy students. The original project was driven by feedback from students asking for more time to practise their speaking skills in class. This feedback was derived from the conventional student course evaluations administered at the end of each semester. The results of these evaluations are used to inform instructors about the efficacy of their teaching practices and to provide input for possible course revisions and improvements.


During the spring semester of 2019, a series of workshops was initiated. The goal of the workshops was to provide students with an opportunity to discuss a variety of contemporary issues in small groups outside of regular classes. At the end of each session, students were given a platform for sharing feedback on their experiences of the workshops. The comments collected mirrored those of the course evaluations with regard to students' desire for more opportunities to use their English-speaking skills in their academic work. Given the positive responses to the original project, evident both in the improvement in students' grades and in the data collected from student surveys, course evaluations and workshop feedback, the decision was made to continue the project.


2   Literature Review

As one of the four main skills required in language learning, speaking is widely regarded as an important facet of second language acquisition (Ounis 2017: 1). Implementing speaking tests to assess student performance at the tertiary level in ESL classrooms is not new, although it is not as common as the use of reading and writing comprehension assessments. Issues related to the implementation of speaking assessments include the time constraints teachers face in creating and administering the tests and uncertainty as to the efficacy of the assessments. Moreover, supporting students in their speaking development through an improvement in assessment results is of critical concern to instructors. Current research has shown that students' willingness to communicate (WTC) is supported by improvement on speaking assessments (Kanzaki 2016: 496). Similarly, the researchers of the current study focused on the question of how speaking assessment could support students' knowledge and skills development. Ongoing research in the field of speaking assessment in an ESL environment suggests that students' confidence levels correlate with their improvement in assessment results. The findings of this study align with the idea that improved results lead to higher WTC, as the student survey responses showed a 17.7% increase in student confidence levels over the duration of the project.


When considering the implementation of a speaking assessment in an ESL course of study, selecting an appropriate mode of testing is paramount. Research in this field suggests that, when surveyed, most participants had no strong preference for either face-to-face or person-to-machine modes of testing (Qian 2009: 1), though a contributing factor in supporting improvements in students' speaking skills is testing-environment anxiety (Tercan & Dikilitaş 2015: 1). The findings of the current study suggest that a person-to-machine mode of testing without an instructor present helps reduce participants' testing anxiety.


In classroom assessment, it is common and desirable to give test takers concrete and descriptive feedback to allow for improvement in task performance. This usually takes the shape of formative assessment throughout a semester, by which students can see where they have difficulties and take targeted action to progress. Additionally, instructors can identify problems, adjust their lessons to cover these areas again and offer more practice. A speaking test can include information-related tasks that require learners to discuss or explain something while selecting relevant content and presenting it in a clear sequence, fluently and accurately. In the current project, detailed in the following sections, a formative classroom assessment tool centered on such information-related speaking tasks was created and tested.


Language learners being assessed on their speaking ability require consistency in scores (reliability) and usefulness in identifying their ability to use structured language items effectively (validity). Ensuring reliable, valid and fair speaking assessment requires procedures that minimize the influence of human-rater bias on results. According to Luoma (2008), developing these procedures or aspects of a test is a "cyclical" process which centers on the test development and administration stages: "The rating is an interaction between raters, the criteria and the performances (…) to produce the scores" (Luoma 2008: 170). Grove & Brown (2001) and Luoma (2008) state that an exclusive focus on performance tasks in the development stage should be abandoned in favor of a process in which criteria and tasks are created concurrently. To ensure validity, assessors / instructors can also reflect on the consequences of the chosen tasks, scoring model or tool. Luoma (2008: 186) suggests identifying this washback effect by studying both test-takers' attitudes and experiences with the test and scoring tool as well as the effect on teaching and teachers. The speaking assessment tool and the rating system presented in this paper have gone through this cyclical process and have seen a number of iterations so far. This study looks at the washback effect by implementing surveys and comparing assessment results at various stages.


3   Methodology

The speaking assessment was based on pre-existing, nationally implemented language proficiency tests such as the TOEIC and EIKEN tests. Such standardized speaking tests are comprehensive and can cover up to five different skill categories, requiring individual participants to spend up to 20 minutes to complete the assessment (ETS 2010). Given the time such testing requires within a 15-week semester, the decision was made to limit the assessment to three sections so as to maintain an adequate amount of time for classroom instruction, including practice and test preparation. The three designated sections of the speaking test were reading a text aloud, describing a picture and giving an opinion. As with the standardized tests, this speaking assessment was designed to be a timed test. All testing materials were standardized in a format that included instructions in both English and Japanese.




4   Materials Production  



As can be seen in the example below, the assessment materials have been standardized, and templates have been created so that any instructor can easily insert a piece of text matching the level of their students. The format of the assessment materials includes an indication of which part of the test the document refers to (Part 1: Read a Text Aloud) and the instructions for what the student should do, all in both English and Japanese. As the original structure of the speaking test was based on the ETS TOEIC test, the English-language task instructions and timing for all three tasks are very similar to those used in the TOEIC speaking test (ETS 2021). As this is a timed test, students are instructed to read the text quietly, attending to vocabulary, pronunciation and punctuation, and to ascertain which key words and phrases to stress. This is followed by the text which the students will read aloud and be graded on. For the purposes of this project, it was agreed that the text should be approximately eight lines long and should not include any proper nouns, as proper nouns drawn from languages other than English can be problematic for pronunciation:

Figure 1: Example Assessment Materials: Read a Text Aloud

As can be seen from the sample above, here again a bilingual, standardized instructional text introduces the instructor-selected picture for this second speaking task. Students have 30 seconds to analyse the image and identify the most important elements that will help them describe it logically. The following 45 seconds are meant to allow students to show that they can smoothly describe an image using the active vocabulary they already possess, so that the listener can perceive the same image through their description. The pictures are taken from royalty-free internet sources such as Pexels (pexels.com). The selection criteria for images are four-fold: there should be various easily identifiable actions being performed by persons in the image; the background should be in focus and include objects or persons for further description; the location and time of day should be easy to comprehend; and clothing should be colourful and varied enough to warrant description. All instructors contributed to the search for both the practice collection and the assessment collection. The image in Figure 2 was sourced from Unsplash.com:

Figure 2: Example Assessment Materials: Describe a Picture

Figure 3: Example Assessment Materials: Express an Opinion

The sample above shows an example of the last section of the assessment (Figure 3). The standardization of the materials follows a logical step-by-step process, including the part of the test, the title and the testing instructions, all written in both English and Japanese. At the bottom of the sample is the question, which students are required to read quietly and think about for 30 seconds before they are asked to respond by giving their opinion and speaking for 60 seconds. Their responses are also required to include reasons for their opinions, supported by examples. It was also agreed that the statements provided to students would follow a standardized phrase beginning with Do you think that…?. This format was likewise designed so that any instructor could easily enter onto the template a question which they felt adequately matched their students' ability to understand and respond.

In-class instruction as well as all supplementary course materials were implemented with a focus on a clear and concise explanation of the testing environment. One of the motivations of the project was to design a speaking assessment, including course instructions and supplementary materials, which could easily and effectively be used by any instructor across departments. The materials were also intended to be used in support of any textbook required by any of the five departments. For this project, the World Link series of textbooks was used for all eleven participant groups, and text passages, photos and questions from the textbooks were used for practice and test preparation. Students were given opportunities in groups to consider, discuss and provide answers for all three parts of the test during classes. For students in the online class, Zoom break-out rooms were used to provide space for group work. Students were also given copies of an analytic rubric through which both peer and instructor feedback was contributed, allowing for a formative learning process throughout the semester.

The researchers opted for an analytic rubric over the holistic approach because 

holistic rubrics are not suited for formative assessment as they do not provide specific feedback on areas that need improvements in students’ oral productions. (Ounis 2017)

Our rubric, in all its incarnations, consisted of numbered bands / columns ranging from one to five points per production task, for a total maximum grade of fifteen points. The original bilingual rubric was not sufficiently descriptive: on the one hand, it made it difficult for instructors to grade student fluency objectively; on the other, students had difficulty grasping what was expected of them. A comparison of the Describe a Picture row in both versions below provides a more detailed look at the evaluative elements used both to guide student production and to evaluate it:


Table 1: English Rubric - Version 1: Describe a Picture

The focus of the first rubric was on providing important details, logical order, the use of adjectives, verbs and prepositions, and overall grammatical correctness. Though the required details were explained and practised in class, they were not itemized or described in the rubric itself: 

One point would be awarded if the student:

Describes unimportant details without logical order and/or makes 7+ grammar and/or vocabulary mistakes.

Five points would be awarded if the student:

Describes 5+ aspects in logical order and includes adjectives / verbs / prepositions and mentions locations and persons.

The one-to-five range included a value-reliant term (unimportant), while the types of grammar mistakes were not specified. Logical order was also insufficiently explicated. To assess what had been taught in class and grade students fairly, these aspects needed to be corrected.

The revised version of the rubric was more descriptive and gave students a chance to see for themselves what aspects they had to work on when practising individually:  

Table 2: English Rubric Version 2: Describe a Picture – 5-Point Band

The maximum-points block now clearly states which aspects need to be described, such as locations, people and actions, while reminding students that grammar, vocabulary and diction need to be correct. Students would focus on the maximum-points block and practise at home as well as in class, where they were additionally supported by peer and instructor evaluation using the same rubric. In addition to the aforementioned formative aspect, the use of the rubric requires self-sufficiency and the use of learning strategies introduced in class. These findings are in line with the self-regulation studies by Boekaerts & Corno (2005) and Zimmerman (2002). The latter suggests that

self-regulated learners are able to generate feedback, interpret self-generated and externally mediated feedback and use feedback to achieve their own learning goals. (Zimmerman 2002, as quoted by Xiao & Yang 2019: 41)

The rubric remains a work in progress and is currently in its fourth incarnation which is even more concise and can be more effectively used on its own as a study-at-home tool for second-year students at the university. 
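
For readers who want the scoring arithmetic spelled out, the short sketch below shows how three band scores of the kind described above combine into the fifteen-point total. It is a minimal illustration under our own assumptions; the function and task labels are hypothetical and are not taken from the project materials.

```python
# Minimal sketch (hypothetical names): combining analytic-rubric band scores.
# Each of the three production tasks is scored on a 1-5 band,
# giving a maximum total of 15 points, as described above.

TASKS = ("read_aloud", "describe_picture", "give_opinion")

def total_score(band_scores: dict[str, int]) -> int:
    """Sum the per-task band scores after checking they fall in the 1-5 range."""
    for task in TASKS:
        score = band_scores[task]
        if not 1 <= score <= 5:
            raise ValueError(f"{task}: band score must be between 1 and 5, got {score}")
    return sum(band_scores[task] for task in TASKS)

# Example: a student scoring 3, 4 and 5 on the three tasks receives 12 / 15.
print(total_score({"read_aloud": 3, "describe_picture": 4, "give_opinion": 5}))
```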

In conjunction with in-class instruction, supplementary course materials were provided to students either as printed paper copies or as Word documents. In an endeavour to support students' ongoing speaking-skills development, these materials were designed to offer a logically constructed format which was clear and easy to understand. To give students a variety of choices in developing positive learning strategies, opportunities were provided in class for practice, including both peer and instructor feedback. Through this process, students were guided to follow a logical step-by-step process (for example, instructions for describing a picture, which included the place, the setting and the persons' actions) to effectively complete the speaking assessments:

Figure 4: Example Instructions: Giving your Opinion

Through instructor discussions following the mid-term speaking test, it was agreed that additional instructional materials were necessary to focus students' improvement on two specific areas of the assessment. Remedial materials were then created to support students in the use of the simple present tense of the verb to be when describing a picture, in addition to directing them to concentrate on providing two reasons and two examples when giving their opinions:

Figure 5: Example Remedial instructions: Describe a Picture

For this project, all participants were instructed to record a video of themselves completing the speaking assessments. Testing materials and instructions, including video guidelines and submission requirements, were all provided to students through the university's Learning Management System (LMS). Students in face-to-face classes were given an email address for their video submissions, whereas students in the online class used Flipgrid (https://info.flip.com/). The materials were made accessible to students for a 24-hour period on the day of their class. Tercan & Dikilitaş insist that

speaking skills should be taught in socially non-threatening settings to allow for greater learner performance (Tercan & Dikilitaş 2015: 10) 

Accordingly, our study presupposes that assessment should also be conducted in a non-threatening environment. As such, students were advised that they did not have to attend class on the day of the assessment, so as to provide them with the time and space to record their speaking test videos in an undistracted environment.

The grading system for all eleven participant groups was standardized to provide a uniform instructional experience for all students. The breakdown of grades was as follows:  

  • mid-term speaking test: 20%

  • final speaking test: 20%

  • final written exam: 20%

  • autonomously implemented assessments: 40%

To ensure ease of implementation of the assessment by any instructor while preserving teacher autonomy, 40% of the overall course grade was allotted for individual instructors to use however they deemed most effective.
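
As an illustration of how these weights combine into a final course grade, the following minimal sketch applies the percentages listed above to a set of hypothetical component scores; the component names and the sample values are ours, not data from the project.

```python
# Minimal sketch (hypothetical names and sample scores): weighted course grade.
# The weights follow the breakdown listed above and sum to 100%.

WEIGHTS = {
    "midterm_speaking": 0.20,
    "final_speaking": 0.20,
    "final_written": 0.20,
    "instructor_assessments": 0.40,  # the autonomously implemented 40%
}

def course_grade(scores: dict[str, float]) -> float:
    """Combine component scores (each on a 0-100 scale) using the fixed weights."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Example: scores of 70, 80, 75 and 85 on the four components give a final grade of 79.0.
example = {
    "midterm_speaking": 70,
    "final_speaking": 80,
    "final_written": 75,
    "instructor_assessments": 85,
}
print(course_grade(example))
```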

The testing schedule was also standardized for all participant groups, with the mid-term test falling in Week 8 of the semester and the final speaking test in Week 14. All details of the course grading and schedule were given to students in the first week of classes, either as a printed paper handout or as a Word document. Instructors also continued to remind students of this information as they practised and prepared for the assessments throughout the semester. In addition, students were provided with formative feedback on the mid-term assessment, including their scores for each of the three parts of the speaking test and their overall grades.


5   Project Results

The data collected for the speaking assessments, including the baseline, mid-term and final tests, showed improvement across the sequence of exams. This was supported by the introduction and use of the analytic rubric as well as the explicit and remedial instruction provided to students throughout the semester. In response to the research question of the extent to which the assessment tool improved first-year students' comprehensibility in English, the average grade differential between the mid-term and the final video submissions was +2.4%. There was, however, an anomaly in the online class: test results decreased by 8%, or 0.4 of a possible 5 points, between the baseline and the mid-term, and by 4%, or 0.2 of a possible 5 points, between the baseline and the final exam.

In contrast, in reference to the research question of the extent to which classroom instruction helped improve student confidence, survey results showed that in all classes across departments, students' confidence in their ability to express themselves in English increased over the semester. In the pre-course survey, students (n=116) were asked "How would you rate your English-speaking ability now?":


Figure 6: Pre-course English Speaking Ability

While the proportion choosing somewhat weak increased to 68.3%, a few students felt confident enough to say that their English-speaking ability was strong. The number of learners who rated themselves weak decreased to 18.6%:

Figure 7: Results Average: Describe a Picture

A baseline of students' speaking abilities upon entering the first-year classes was established by requiring them to describe a picture before any formal instruction was provided in class. An example of one of the standard-format materials (Part 2: Describe a Picture) was made available to students via the university's LMS. Students were simply asked to view the material, follow the directions on the example and then time themselves while recording a video of their descriptions. This non-graded assignment was set as homework during the first week of classes.

The results of students' baseline assessments were then used as a reference point for evaluating their skills on both the mid-term and final speaking tests. For the face-to-face classes, the mid-term test results showed an improvement over the baseline, and the final test results showed an improvement over both the baseline and the mid-term speaking tests. For the online classes, however, there was an anomaly: the average test scores for these students showed a reduction of 0.4 out of a total of 5 points for the mid-term in comparison to the baseline, and a reduction of 0.2 out of a total of 5 points for the final in comparison to the baseline. One possible factor in this drop in average test scores could have been students' confidence in their ability to describe a picture, leading them to focus their test preparation on Part 3, which many students felt was the most challenging aspect of the speaking test:

Figure 8: Pre-course Survey: Student Confidence

Qualitative data were collected for this project using student surveys, including a pre-course and a post-course questionnaire distributed to all subject groups via the university's LMS. With reference to all three sections of the speaking assessment, for both the face-to-face and online subject groups, I can't do it responses in the pre-course survey ranged from 5.2% to 13.1%. The 5.2% referred to face-to-face students responding to whether they could successfully read a piece of text aloud; the 13.1% referred to online students responding to whether they could successfully describe a picture. This range in survey responses represents the differences in the face-to-face and online subject groups' confidence levels across the assessment tasks.

With reference to the statement I can do it with practice, students' responses ranged from 66.4% to 82.4%. The former percentage referred to face-to-face students responding to whether they could successfully read a text aloud, whereas the 82.4% referred to online students responding to whether they could successfully give their own opinions. These numbers supported one of the project's motivations, namely the belief that clear and concise instructions in combination with an analytic rubric would provide students with the necessary foundation to improve their confidence in their speaking skills:

Figure 9: Post-course Survey: Student Confidence

In the post-course survey, responses to the statement I can't do it for both the face-to-face and online subject groups decreased to a range from 4% to 8%. The 8% represented the highest rating and concerned the online subject group's lack of confidence in being able to successfully give their opinions. Nevertheless, these results reveal that the post-course response to this statement dropped by 2.1% compared with the group's response in the pre-course survey.

For the statement I can do it with practice, the responses for both groups ranged from 62.9% to 89.3%. The 89.3% level of confidence referred to the face-to-face groups' perceived ability to successfully describe a picture. For this group, the responses from pre-course to post-course revealed an increase of 17.7%. The overall increase in students' perceived confidence in their speaking abilities and production of logical arguments addressed one of the project's research questions, i.e. to what extent classroom instruction helped students to improve their confidence levels.


6   Discussion 

One aspect of the classroom learning environment which was revealed was that students' willingness to communicate with others was strongly and positively influenced by a sense of being observed. This suggests that pair or group work offered a positive learning experience for language learners, providing them with both practice and peer feedback in support of developing confidence in their abilities. This was supported by instructor observations in the classroom and by instructor comments given as feedback on the mid-term speaking test results.

During the original incarnation of the speaking test project, prior to the emergence of the coronavirus, students were assessed individually and face-to-face by an instructor in the instructor's office. For the purposes of objectivity, test groups were assessed by a different teacher than their classroom instructor. Subsequently, due to the spread of the coronavirus and the university's directive to hold all classes online, the format of the speaking test needed to be revised. This resulted in asking participants to record a video of themselves completing the speaking test and making the videos available to their instructors. Qian proposed that

if a test taker’s state of mind or disposition is affected by the testing mode in some negative way, the affective filter may also be up to interfere with his or her test performance. (Qian 2009: 5)

As a result, the project has also brought to light how asynchronous testing allowed students to step back from the pressure of instantaneous teacher evaluation. This result indicates that the ability to work alone during the testing process gave students the confidence to complete the speaking assessments effectively. Consequently, the decision was made to continue requiring students to record themselves for the speaking tests, both to support their confidence levels and to mitigate the laborious and time-consuming nature of the face-to-face testing and evaluation process for instructors.

In addition, 47.2% of the participants said that English would be important for their future careers, and 71.9% stated that they would recommend this type of class to their friends. 

On a more personal note, the researchers think that it is important to include student voices (in their own words) gathered post-course regarding what they liked about the module and what they learned from collaborating with and evaluating peers:

One thing I liked about having more speaking practice in class was.....

“I could talk with my friends using English.” (Shinji)

“I had a lot of opportunities to speak English.” (Mari)


One thing I learned from my classmates was...

“I learned a way to express my opinion clearly.” (Toki)

“Importance of pronunciation, and facial expression.” (Saki)

Before the project started, comments showed that our language learners did not have enough speaking opportunities in class. The above comments support the continuation of our project, since students seemed satisfied and / or saw the benefits of doing a tri-sectional, multi-modal speaking practice / assessment.


7   Conclusions

The increase in student comprehensibility over the semester was strongly influenced by concrete instruction on the purpose of the speaking tasks and the assessment criteria. Most importantly, the realization that came from receiving lower scores before this guidance informed students' more positive attitudes towards the usefulness of criteria instruction. Peer support, pre- and post-assessment feedback, and the use of deferred evaluation through recorded videos had a positive impact on subject confidence. The increase in non-instructor-centered practice allowed for more speaking time both in class and in the testing environment. Through the adjustment of this aspect of instruction, students' confidence in their abilities and their comfort level in speaking increased. This suggests that the increase in confidence could be directly related to the increase in scores.


8   Limitations and Final Remarks

One of the challenges in any instructional environment is the pressure and anxiety experienced by students during the assessment process. As previously stated, providing students with an asynchronous testing setting can alleviate some of this examination anxiety. Moreover, students' acquisition and use of technology remains an ongoing challenge for instructors, whether in a face-to-face classroom or on an online platform. Balancing the amount of time spent instructing students in the use of technology against the amount of time available for instruction in the course materials remains difficult.

Through post-course discussions, instructors agreed that the rubric used for this project could be further developed. Although it had previously been revised and was in its third incarnation for this project, the impression remained that both the scoring bands and the clarity of the definitions along the scale could be improved. Work on a rubric which targets students' abilities more easily and precisely and supports their self-assessment is presently in progress.



References 

Educational Testing Service (2010): TOEIC user guide: Speaking and writing. Princeton, NJ. (http://www.ets.org/s/toeic/pdf/toeic_sw_score_user_guide.pdf; 18-10-2022).

Educational Testing Service (2021): Sample Tests. Princeton, NJ. (https://www.ets.org/s/toeic/pdf/speaking-writing-sample-tests.pdf; 18-10-2022).

Farmer, R. & E. Sweeney (1997): Are you speaking comfortably? Case studies of Improving teaching and learning from the action learning project. In: English Language 1997, 293-304.

Grove, E. & A. Brown (2001): Tasks and criteria in a test of oral communication skills for first-year health science students: where from? In: Melbourne Papers in Language Testing, 10 (1), 37-47.

Kanzaki, M. (2016): TOEIC Speaking test and willingness to communicate. In: Clements, P., A. Krause & H. Brown (Eds.): Focus on the learner. Tokyo: JALT, 491-496. 

Luoma, S. (2008): Assessing speaking. Cambridge: Cambridge University Press.

Ounis, A. (2017): The Assessment of Speaking Skills at the Tertiary Level. In: Canadian Center of Science and Education (http://doi.org/10.5539/ijel.v7n4p95; 15-01-2019).

Qian, D. D. (2009): Comparing Direct and Semi-direct Modes for Speaking Assessment: Affective Effects on Test Takers. In: Language Assessment Quarterly (https://doi.org/10.1080/15434300902800059; 15-01-2019).

Tercan, G. & K. Dikilitaş (2015): EFL students’ speaking anxiety: a case from tertiary level students. In: ELT Research Journal 4 (1), 16-27.

Xiao, Y., & M. Yang (2019): Formative assessment and self-regulated learning: How formative assessment supports students’ self-regulation in English language learning. In: System 81, 39-49. (https://doi.org/10.1016/j.system.2019.01.004; 18-10-2022).



Authors:

James Steven Herbach

MAED, Lecturer
Kwansei Gakuin University
Hyogo, Japan
Email: james_herbach@yahoo.com

Kinsella Valies

MAAL, Assistant Professor
Jissen Women's University
Tokyo, Japan
Email: kicvalies1@gmail.com