Volume 7
Title: Usability of CEFR Companion Volume scales for the development of an analytic rating scale for academic integrated writing assessment
Pages: 155-177
Author(s): Claudia Harsch (University of Bremen), Valeriia Koval (University of Bremen), Ximena Delgado-Osorio (DIPF, Leibniz Institute for Research and Information in Education), Johannes Hartig (DIPF, Leibniz Institute for Research and Information in Education)
DOI: https://doi.org/10.37546/JALTSIG.CEFR6-9
This article is open access and licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
Abstract:
Successful academic writing from sources requires a broad range of competencies. When writing from sources, students are expected to mine source texts for relevant ideas, present these ideas with precision and in the necessary depth, demonstrate efficient paraphrasing skills, and show knowledge of proper source attribution. To assess this combination of skills in writing and to provide diagnostic feedback to learners, a rating scale is needed in which the required skills are operationalized as separate criteria (Knoch 2011). This endeavour may be challenging, however, due to the complex nature of the academic integrated writing construct.
This article describes the process of developing an analytic rating scale in the context of German higher education (HE). We address the issues of construct complexity and the operationalization of construct elements as rating scale criteria through a combination of theory-based, descriptor-based, empirical, and intuitive approaches to scale development (e.g., Chan, Inoue and Taylor 2015; Kuiken and Vedder 2021), with a particular focus on the usability of relevant scales from the CEFR Companion Volume (CEFR/CV; Council of Europe 2020). Beyond the CEFR scales, we also explore the usability of existing scales for integrated writing and of relevant taxonomies (e.g., Keck 2006; Shi 2004). Finally, we present qualitative insights from intuitive expert judgement, drawn from a workshop with four content experts who trialled and refined the first draft of the rating scale. The ensuing validation of the rating scale is, however, beyond the scope of this paper; the mixed-methods validation study will be reported elsewhere.
The rating scale development reported here was part of the DFG-funded research project Modelling of academic integrated linguistic competencies, conducted at the University of Bremen and the Leibniz Institute for Research and Information in Education in Frankfurt. The project aimed to evaluate the academic-linguistic preparedness of students taking up English-medium studies in Germany by employing authentic integrated writing tasks and valid assessment procedures. The article offers insights into the challenges and critical considerations involved in developing CEFR-based rating scales for integrated writing, focusing on selecting valid rating criteria, defining bands, and adapting existing descriptors.
Keywords: rating scale development, CEFR/CV, integrated writing tasks, academic preparedness, validation of a rating scale
* * * * * *