
3: Assessment

    Charles Bazerman, Chris Dean, Jessica Early, Karen Lunsford, Suzie Null, Paul Rogers, & Amanda Stansell (WAC Clearinghouse)

    Every time we write, we assess our plans and the words we produce to see whether we can improve them. Every time we provide feedback to students, we assess what they have done and suggest what they could do better. Every time we assign grades for writing assignments, we assess. However, in the United States, large institutional and policy pressures have driven assessment and the conflicts surrounding it to a very different level.

    The establishment of remedial writing courses at US universities in the late nineteenth century led to assessments of entering students' writing skills to determine who would be required to take them. The expansion of universities and the increasingly democratic intake of students throughout the twentieth century made assessment of writing a growing institutional presence. Further, as state and urban systems of higher education became centralized, placement exams in the 1970s became standardized across campuses, led by system-wide exams in the California State University system, then nineteen campuses, and the City University of New York, then seventeen campuses. To maximize uniformity of evaluation and to limit costs, timed essays on general topics, graded on a four- to six-point holistic scale, soon became the standard. Such tests were initially seen as an improvement over multiple-choice examinations, in that students at least were required to produce extended coherent prose, although from the beginning the authenticity and validity of such writing were questioned.

    In some systems these assessments then became not only placement but graduation requirements, as the CUNY Writing Assessment Test did. At the same time, external providers such as the Educational Testing Service developed timed essay-writing tests, and strong pressures emerged to tailor writing instruction toward passing these high-stakes tests. Eventually, in 2005, ETS and the College Board introduced a writing component in the SAT college entrance exam. Over the years, however, writing educators increasingly advocated for writing portfolios as more authentic and more supportive of good pedagogy, and a few systems moved in that direction despite the increased costs in time and human resources.

    State and federal policies for accountability in primary and secondary schools then brought these timed examinations to the public school system, along with examinations in reading and math. At first such assessments were carried out only through selected samples aimed at evaluating school districts and states, as in the National Assessment of Educational Progress, initiated in 1969, which added a writing exam in 1984. With stricter standards for individual student accountability at the state level throughout the 1990s, and with accountability at the school level brought on by the No Child Left Behind legislation, these examinations became increasingly pervasive and high-stakes, even though NCLB required only reading and mathematics examinations. With such large numbers of students taking the exams, it became increasingly attractive for external providers both to administer them and to offer educational support to students. Again, the pedagogical consequences of the growing reliance on these exams were highly controversial, with many seeing them as destructive of the authentic, motivated writing that develops through an extended process within a meaningful situation, in dialogue with other writers, and in engagement with information and subject-area learning.

    The development of digital writing assessment technologies adds the final piece to these controversies. While such technologies offer cost efficiencies both for large-scale institutional testing and for providing feedback on extended student practice, the lack of an authentic situation, the standardizing effect on tasks and criteria, and the absence of meaning-making in the assessment have made them highly controversial. Nonetheless, advocates argue that these technologies have a place within writing education at all levels, from elementary school through higher education.

    In this section, we provide a cross-section of current research addressing these controversies, offering different directions for the future of writing assessment at all levels, from both institutional and pedagogic perspectives. Deane et al. present the results of initial testing of new automated assessment tools built within a larger model of writing instruction and assessment. Klobucar et al. present the results of a collaboration between ETS and one university to integrate automated assessment into a wider suite of educational practices. Perelman provides a critique of the limitations of these technologies in providing meaningful assessment and feedback. O'Neill et al. analyze the political context of assessment practices and technologies. Swain et al. and Lines provide alternative models for developing assessments.

    —CB