Taking Another Look at Student Perceptions of Automated Writing Evaluators (AWEs): The Case of a Sino-British Joint Venture

Introduction

My last contribution to this blog (here) summarized recent research that examined how students responded to one automated writing evaluator (AWE), Grammarly. I’ve decided to look at AWEs to help inform my work as a member of the graduate faculty at NYU Shanghai. At my school, we recently provided all of our graduate students (~15) with premium access to Grammarly free of charge. We are looking for ways to scaffold students’ acquisition of professional literacy and to support student success. We’ve chosen this route, in part, because we’ve just begun to add graduate programs to NYU Shanghai and we don’t yet have the same academic support structures in place for graduate students as we do for undergraduates.

In this post, I’m going to look at another article that examined how students view AWEs, this one focused on L2 writers in the East Asian context, specifically at Xi’an Jiaotong-Liverpool University (XJTLU). The article extends the work discussed in my previous post by zeroing in on the Chinese context. It also builds on Cavaleri and Dianati’s (2016) work by providing more concrete recommendations that may be useful to L2 writing specialists should they choose to introduce their students to AWEs. Two of Reis and Huijser’s (2016) most salient takeaways are the following:

1.) Many commercially available AWEs fall short in their support of multilingual writers working in multilingual contexts. That is, they don’t sufficiently speak to the unique rhetorical and linguistic toolkits and needs of this population.

2.) Usability, or how easily a typical user can successfully use a tool (Krug, 2014), is a critical concern when choosing, or designing, an AWE for student use.

Reis and Huijser provide some very useful points as we consider the possible role of AWEs in supporting L2 writers.

Summary

Reis and Huijser (2016) used a multi-pronged approach to answer two questions: first, how do students perceive AWEs; and, second, how can we, as educators, address their concerns so that students are more likely to use a tool that may help them improve their writing? Their data collection methods included web-delivered surveys and in-person focus groups targeting both students and academic support staff. Through their research they uncovered six major themes related to students’ views of AWEs.

First, they found that students reported issues with the structure of the feedback provided by the AWE under study, a proprietary tool called Marking Mate. In particular, students reported that the font size of the explanatory callouts was too small and that the emoji-based feedback system was too difficult to understand. Second, the students stated that they had problems with how the software handled feedback on formal versus informal writing. Marking Mate would often flag informal language in academic essays, but it would not provide students with viable alternatives. Third, they found that students had numerous usability-related issues that made the tool difficult to incorporate into their writing process, including fonts that were too small to read easily and color-coding schemes that were not designed with color-blind/color-deficient individuals in mind. Reis and Huijser’s (2016) fourth and fifth themes related more explicitly to the kind of feedback that the tool gave. They found that students were annoyed when the program would flag core content words as repetitive, thereby lowering the formative assessment score that the software gave the students’ writing. The fifth theme concerned the tool’s inability to offer feedback on the proper formatting of references and in-text citations. Their sixth theme spoke to a common problem faced by teachers who choose to deploy AWEs: that students see the tool as nothing more than a checking tool.

Reis and Huijser (2016) provide some actionable suggestions for how to redress the issues that their research uncovered. The most salient one is that we must train our students to use the tool to learn about better writing and revision practices. We must teach them to engage critically with its feedback; that is, we must help students see the AWE as a support, not as a silver bullet that will simply fix things for them.

Closing Comments

In closing, I would like to take a moment to expand on a point that seems peripheral to Reis and Huijser, despite the fact that it recurred across many of their themes—namely, the issue of usability. If a person cannot easily use a tool for its intended purposes, they will quit using it. Think of every poorly designed website you’ve ever visited. I’m sure that you quickly tried to find an alternative site if you simply couldn’t find what you were looking for, or if you had to click 27 links just to get to the log-in page. Usability is a central concern in developing digital tools. When I was the coordinator at the Purdue Online Writing Lab (OWL), we would regularly invite our users in for usability testing so that we could see where we needed to focus our redesign efforts. In judging whether or not to recommend an AWE to students, I would encourage you to begin by asking yourself, “How usable is this tool? Does it infuriate me?”

References

Cavaleri, M., & Dianati, S. (2016). You want me to check your grammar again? The usefulness of an online grammar checker as perceived by students. Journal of Academic Language & Learning, 10(1), 223-236.

Krug, S. (2014). Don’t make me think, revisited: A common sense approach to web usability (3rd ed.). San Francisco, CA: New Riders.

Reis, C., & Huijser, H. (2016). Correcting tool or learning tool? Student perceptions of an online essay writing support tool at Xi’an Jiaotong-Liverpool University. In Show Me the Learning (pp. 529-533). Adelaide, AU: ASCILITE.

Joshua M. Paiz
Lecturer at NYU Shanghai
Joshua M. Paiz holds a Ph.D. in TESOL from Purdue University and is currently a lecturer and L2 writing specialist at NYU Shanghai. His research interests include L2 writing, SLA, identity in applied linguistics, and critical issues in TESOL.

1 thought on “Taking Another Look at Student Perceptions of Automated Writing Evaluators (AWEs): The Case of a Sino-British Joint Venture”

  1. hi Joshua
    apparently there is a line of scholarship which critiques 3rd party tools such as AWEs from an “academic literacies” point of view; there is a suggestive abstract here of a recent talk on this [http://d1qmdf3vop2l07.cloudfront.net/azure-shark1.cloudvent.net/compressed/a01594af9c78f2084e16041b22d77009.pdf]

    sample quote: “we argue that these products are part of an emerging, curated learning environment characterized by genericness, decontextualisation, individuation, and fragmentation. We suggest that a socially-oriented narrative which might provide coherence and meaning to a program of learning is beginning to fade from view, and a new kind of hidden curriculum is emerging: one in which the student is increasingly expected to take responsibility for assessing, designing and managing their own learning journey.”

    and an interesting slide from talk:
    [https://twitter.com/cioccas/status/926169860740018176]
    ta
    mura
