How do raters understand rubrics for assessing L2 interactional engagement?: A comparative study of CA- and non-CA-formulated performance descriptors
Karlstad University, Faculty of Arts and Social Sciences (starting 2013), Department of Language, Literature and Intercultural Studies (from 2013), Karlstad, Sweden. ORCID iD: 0000-0002-7286-1577
Kobe University, Japan.
2020 (English). In: Papers in Language Testing and Assessment: An international journal of the Association for Language Testing and Assessment of Australia and New Zealand, E-ISSN 2201-0009, Vol. 9, no. 1, p. 128-163. Article in journal (Refereed). Published.
Abstract [en]

While paired student discussion tests in EFL contexts are often graded using rubrics with broad descriptors, an alternative approach constructs the rubric from extensive written descriptions of video-recorded exemplary cases at each performance level. With its long history of deeply descriptive observation of interaction, Conversation Analysis (CA) is one apt tool for constructing such exemplar-based rubrics; but to what extent can teacher-raters who are not CA specialists interpret a CA analysis in order to assess the test? This study explores this issue by comparing two paired EFL discussion tests that use exemplar-based rubrics, one written by a CA specialist and the other by EFL test constructors not specialized in CA. The complete dataset consists of test recordings (university-level Japanese learners of English and secondary-level Swedish learners of English) and recordings of teacher-raters' interaction. Our analysis focuses on the ways experienced language educators perceive engagement while discussing their ratings of the video-recorded test talk in relation to the exemplars and descriptive rubrics. The study highlights differences in how teacher-raters display their understanding of the notion of engagement within the tests, and demonstrates how CA rubrics can facilitate a more emically grounded assessment.

Place, publisher, year, edition, pages
Association for Language Testing and Assessment of Australia and New Zealand (ALTAANZ), 2020. Vol. 9, no. 1, p. 128-163.
Keywords [en]
engagement, conversation analysis, paired discussion tests, interactional competence, English as a foreign language (EFL)
National Category
Languages and Literature
Research subject
Comparative Literature
Identifiers
URN: urn:nbn:se:kau:diva-82446
ISI: 000593061500006
OAI: oai:DiVA.org:kau-82446
DiVA id: diva2:1516747
Available from: 2021-01-12. Created: 2021-01-12. Last updated: 2026-02-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Sandlund, Erica
