
6.2: Rubrics Save Time and Make Grading Criteria Visible


    Author: Anna Leahy, Chapman University

In K–12 and in higher education, rubrics have become popular for evaluating students’ writing. The straightforwardness of a rubric—a list of criteria, of what counts, often in checklist form—appeals to the instructor or evaluator and to administrators collecting data about students because it is a time-saving tool for measuring skills demonstrated in complex tasks, such as writing assignments. Rubrics, however, are a bad idea for writers and for those who teach writing.

A Rube Goldberg machine is an overly engineered mechanism for a simple task. A rubric, similarly, looks fancy and is often quantitative; with its seemingly airtight checklist, it appears incredibly well engineered. In fact, it is overly engineered to organize feedback into criteria and levels, rows and columns. Instead of responding to writing in language—with oral or written feedback—many rubrics mechanize response. At the same time, a rubric is an overly simple way to ignore that an essay is a complicated whole; it is impossible to tease its characteristics completely apart, because they are interdependent. A rubric, then, is an odd way to simultaneously overcomplicate and oversimplify the way one looks at and judges a written text.

Let’s begin with a look at the word’s origins and uses to understand where the problem with rubrics begins. The word rubric comes from the Latin word for red. The red to which the word originally referred was the color of earth containing the impure iron known as ocher, which was used to make ink. A rubric is, unfortunately and perhaps inadvertently, a way to focus on the impurity; it’s a red pen, with all its power to correct and to write over what has been created. If one evaluates writing by looking for what’s wrong or what needs to be corrected or fixed, one misses potential and fails to point toward improvement in the future. Moreover, the rubric’s simplicity implies that all writing can be fixed or corrected and that this correction can be done in the same way across pieces of written work and across students, instead of suggesting that revision—sometimes re-envisioning—is a more rewarding and fruitful step in becoming a better writer.

Contemporary definitions of rubric suggest that it’s a term equivalent to a rule, especially for how to conduct a liturgical service (like stage directions printed in red), or to an established tradition. In other words, rubrics work to maintain the status quo and to prevent experimentation, deviation from the norm, and innovation. If you do x and y and z, the rubric says, your writing is good. But what if you do x and y and b—and discover something you hadn’t known before, something that isn’t on the rubric? The rubric does not accommodate the unexpected.

Following the rules—the rubric—to the letter is the opposite of what good writing does. Even writers as different as Flannery O’Connor and Joan Didion have said that they don’t know what they think until they write it. Writing, then, is a way of thinking, of inventing one’s thoughts through language and inventing sentences that represent those thoughts. But a rubric is a set of preconceived parameters—designed before seeing the products of the task at hand—that applies across the board. While a set of assignment guidelines can allow a writing task to be carried out in various ways, a rubric becomes an evaluative tool that doesn’t make sense if writing is the individual exploration that many writers experience. A rubric suggests that the task and its goals are understood before the writing itself occurs and that writing works the same way for everyone every time. Even when a rubric works adequately to evaluate or provide feedback, or even when teachers ask students to practice particular techniques and know what they are looking for, using such a tool sends students the message that writing fits preconceived notions. Students know that, on some level, they are writing to the rubric instead of writing to think.

Another contemporary definition of the word is a heading or category. That definition suggests that using a rubric to evaluate writing is a way to label a piece of writing (and, perhaps unintentionally, to label the writer as well). The more comprehensive and detailed a rubric is, the less efficiently it can label. Rubrics, then, cannot be all-inclusive or wide-ranging and also good at specifying and categorizing. These labeling tools rarely allow for the either/or flexibility that recognizes multiple ways to achieve a given goal.

Rubrics, learning outcomes, assessment practices, and the quantitative or numerical scoring of performance emerge out of the social sciences. That’s the underlying problem with using these methods to evaluate writing and to encourage improvement. Why must social science approaches (techniques adapted from science to study human behavior) be used to evaluate work in the arts and humanities? Tools like rubrics ask those of us trained in the arts and humanities to understand the difference between direct (product-based, such as exams or papers) and indirect (perception-based, such as surveys) outcomes. Social science methodology asks teachers to see a text as data, not as language or creative production. Terms such as data are not ones that writers use to describe or understand their own writing and learning. Writing instructors and administrators like me, especially those who use rubrics not only for grading but for assessing entire programs, are using tools in which we are not properly trained and that were designed for other academic disciplines and for data-driven research. While rubrics may be moderately helpful in assessing a program as a whole or in providing program-level benchmarks, they do little, as currently used, to help individual students improve their writing.

According to Classroom Assessment Techniques, a book by Thomas A. Angelo and K. Patricia Cross, the top teaching goals for English are “writing skills,” “think[ing] for oneself,” and “analytic skills.” The arts, humanities, and English share thinking for oneself as a high-priority goal. In addition, the arts, of which writing (especially creative writing) might be considered a part, list creativity as a top goal, and the humanities consider openness to ideas (an extension of thinking for oneself) a priority in teaching. These skills are all very difficult to measure, especially via a preconceived rubric, and much more difficult to measure than goals like “apply principles” or “terms and facts,” which are among the top teaching goals for business and the sciences. In other words, the most important goals for writing teachers are among the most difficult to evaluate. The standard rubric is better suited to measuring the most important aspects of learning in other fields than in writing.

Some rubrics attempt to be holistic, but when they begin to succeed, they are already moving away from being good rubrics for labeling or scoring. The more holistic a rubric is, the less effective it is at saving time—the feature that makes rubrics attractive in the first place. While rubrics can be used for narrative or qualitative feedback, they are unnecessary scaffolding in such cases and, worse, invite prescriptive responses. What’s needed in the evaluation of writing is a move away from the prescriptive and toward the descriptive, away from monitoring and toward formative and narrative feedback. Novelist John Irving has said that his work as an MFA student in fiction writing saved him time because his mentors told him what he was good at and to do more of that, and what he was not as good at and to do less of that. What Irving points to is formative, narrative response from expert mentors and engaged peers, the kind of feedback that helps writers revise their work and explore their options.

Formative and narrative feedback (as Mitch James writes about in a previous chapter) involves the student in analyzing and thinking about his or her own writing. Self-reflection and awareness, which are key to learning over the long haul, become part of these types of evaluation. The simple technique of posing the question “What if?” can compel a writer to try out options, even when the writing task is specific in topic, audience, or length. Importantly, these types of feedback are individualized, not a one-size-fits-all response to a writing task. Feedback can be given at any time, not only when the task is complete and there’s no going back, and not only at a time designated for giving all students feedback. While rubrics can be employed in process, the form encourages end-of-process use in practice and discourages real-time or back-and-forth exchange of information. Nearly real-time response can have an immediate effect, and you don’t need a rubric for that. Summative response based on rubrics takes time, becomes linked with grading, and becomes removed from the ongoing practice of writing itself, all of which make rubrics a bad idea. Formative feedback that is separated from grading, by contrast, often propels a writer into revision as he or she attempts to strengthen the written piece and his or her own writing skills.

    Further Reading

For more of Anna Leahy’s thoughts about assessment, see “Cookie-Cutter Monsters, One-Size Methodologies and the Humanities.” For advice on using rubrics, see “Designing and Using Rubrics” from the Gayle Morris Sweetland Center for Writing at the University of Michigan. In addition, W. James Popham provides a different take on rubrics in the journal Educational Leadership. For more tips on providing formative feedback, see “10 Tips for Providing Formative Feedback.”

    Keywords

    assessment, formative feedback, revision, rubric, summative feedback

    Author Bio

Anna Leahy has taught composition and creative writing for 25 years. She has three books out in 2017: Aperture (poetry from Shearsman Books), Generation Space (co-authored nonfiction from Stillhouse Press), and Tumor (nonfiction from Bloomsbury). She is an editor and co-author of What We Talk about When We Talk about Creative Writing and of Power and Identity in the Creative Writing Classroom. She has published numerous articles and book chapters about teaching creative writing and being a professor and is working on a book about cancer and communication. She teaches in the MFA and BFA programs at Chapman University, where she also directs the Office of Undergraduate Research and Creative Activity, edits the international literary journal TAB, and curates the Tabula Poetica reading series.