3.5: Prominent Feature Analysis: Linking Assessment and Instruction


    Sherry S. Swain, Richard L. Graves, David T. Morse, and Kimberly J. Patterson

    National Writing Project, Auburn University, and Mississippi State University

    Prominent feature analysis grew out of our study of 464 papers from a statewide writing assessment of seventh graders (Swain, Graves, & Morse, 2011). The original purpose of the study was to identify the characteristics of student writing at the four scoring points of the assessment (1–4, with 4 as highest), hoping that such information would assist teachers in linking their writing instruction to writing assessment.

    We began by assembling a team of exemplary English language arts teachers, all with advanced certifications or degrees. The plan was to bring expert eyes to the papers, asking, “What stands out here? What is prominent?” We hypothesized that identifying the prominent features in papers at each scoring level could guide instruction. As part of the training, we read common papers and discussed what constitutes prominence at the seventh-grade level. Though we had no predetermined rubrics or guidelines, relying instead on the educated wisdom of team members, we sought to make our terminology as standard as possible; for example, all metaphoric language was classified as metaphor rather than subdivided into simile, personification, and other figurative forms. We needed to achieve consistency while maintaining a keen professional insight into student writing (Swain et al., 2011).

    Prior to the analysis, team members discussed features that required clarification: cumulative sentences and final free modifiers, voice, and certain intersentential connections, among others. The cumulative sentence and final free modifiers were first described by Francis Christensen (1963), who asserted that the form of the sentence itself led writers to generate ideas. The sentence form has been examined for its impact on writing quality by Faigley (1979, 1980) and Swain, Graves, & Morse (2010). Voice has been presented as socially and culturally embedded in both the writer and the reader by Sperling (1995, 1998), Sperling and Freedman (2001), and Cazden (1993); Elbow (1994), Palacas (1989), and others have offered theories about voice. The present study defines voice in terms of its correlation with other, more concrete features rather than in a formal statement. Flawed sentences were characterized by Krishna (1975) as having a “weak structural core.” Features that touch on larger aspects of writing (organization, paragraph structure, coherence, and cohesion) have been described by Christensen (1965), Becker (1965), Witte and Faigley (1981), and Corbett (1991).

    In the analysis, 32 prominent features, 22 positive and 10 negative, were identified and are shown in Appendix A. All 464 pieces of writing were read twice for accuracy and consistency and reviewed by the authors. To establish the level of classification consistency, we examined the individual score sheets for each of the 464 papers, determining how many changes were made from the initial analysis through the final reading. There were 484 changes assigned to the entire set of 464 papers across the multiple readings. There was a possibility of 14,848 changes, considering that there were 32 features, and that each of the features originally assigned to each paper could have been deleted and each feature not assigned could have been added. The percentage of agreement in this case is 97%. The judgments of presence or absence of prominent features are therefore considered to be both highly consistent across independent readers and to have yielded credible data for the analyses.
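    The agreement figure follows directly from the counts reported above. With 32 features judged present or absent on each of the 464 papers, there were \(32 \times 464 = 14{,}848\) judgments that could have changed across readings, and only 484 of them did:

    \[ \text{agreement} = \frac{14{,}848 - 484}{14{,}848} \approx 0.97, \text{ or } 97\%. \]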

    Correlational analyses were conducted to examine the relationship between the prominent features and the statewide assessment scores; however, this task proved problematic. The state score distribution was severely restricted, tending to attenuate the correlations between these scores and the prominent features. For example, of the 464 papers, only 7 students scored “1,” the lowest score, and 28 scored “4,” the highest score. Thus roughly 91% of the students scored at level 3 or level 2. There was no definitive way to ascertain to what extent the unexplained variance in state writing scores may be a function of (a) restriction of range of assigned scores; (b) unreliability of assessment scores; (c) other systematic aspects (e.g., scorer effect); or (d) some combination of these factors (Swain, et al., 2011).

    After the analysis, the authors continued to look deeply into the student writing and the prominent features. We observed that all the features were either positive or negative; there were no neutral features. From this, we hypothesized the presence of a still point between the positive and the negative, to which we attributed the value “0.” Then in each paper we gave a value of +1 for each positive feature and –1 for each negative feature. We summed the values of the features in each paper, resulting in what we called the Prominent Feature Score.

    In order to express all scores in positive numbers, we reset the value of the still point from “0” to 10, thus giving each paper an additional 10 points. This resulted in an observed range of scores from 3 to 21, shown in Figure 1.
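    To make the scoring rule concrete, the sketch below restates it in code. It is our illustration rather than part of the original study’s procedures: the feature names are hypothetical stand-ins for the Appendix A list, and only the arithmetic (+1 per positive feature, −1 per negative feature, plus the 10-point still-point offset) follows the description above.

    ```python
    # Minimal sketch of the Prominent Feature Score arithmetic described above.
    # The feature names here are hypothetical examples; the full list is in Appendix A.
    POSITIVE_FEATURES = {"dialogue", "cumulative sentence", "vivid verbs", "voice"}
    NEGATIVE_FEATURES = {"weak structural core", "poor spelling", "unfocused"}

    STILL_POINT = 10  # value assigned to the "still point" so that all scores are positive


    def prominent_feature_score(features_noted):
        """Sum +1 for each positive and -1 for each negative prominent feature,
        then shift the total by the still-point value of 10."""
        raw = 0
        for feature in features_noted:
            if feature in POSITIVE_FEATURES:
                raw += 1
            elif feature in NEGATIVE_FEATURES:
                raw -= 1
        return STILL_POINT + raw


    # A paper showing three positive features and one negative feature scores 10 + 3 - 1 = 12.
    print(prominent_feature_score(["dialogue", "vivid verbs", "voice", "poor spelling"]))  # 12
    ```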

    Interestingly, the mean score of the 464 papers is 10.3, which corresponds to the still point of 10.

    Prominent feature analysis provided the kind of information we were seeking originally, the characteristics of seventh grade writing along a continuum of quality. Clearly the prominent feature score discriminates more powerfully among the 464 pieces of writing than does the state holistic score. Important here is that the prominent feature score is derived from specific characteristics of student writing, whereas the state score is merely assigned, using external criteria. Behind each prominent feature score exists a list of the features from which the score is derived, providing the vital link between the assessment of writing and instruction of writing. The study yielded much more than we had anticipated, a rich lode of information about seventh grade writing as well as a method of analysis and scoring that may prove useful in a range of educational contexts.

    Figure \(\PageIndex{1}\): Percent … Scorepoint

    Applying Prominent Feature Analysis

    The opportunity to apply prominent feature analysis in a school presented itself when the principal of Pineville Elementary School (fictitious name) contracted with a local National Writing Project site1 to conduct a yearlong inservice program for her faculty (Swain, Graves, & Morse, 2007). Though the school, nestled in a rural area about 20 miles north of the Gulf Coast, was considered high performing, student writing scores were low, and the teachers there had not participated in professional development focused on writing. Twenty-six faculty members served the 450 students in grades 3, 4, and 5, who were primarily Caucasian, with slightly over half participating in the free- and reduced-price lunch program.

    The Pineville Project

    The project involved two teams. A professional development team led workshops, conducted classroom demonstrations, and modeled response to student writing. A research team coordinated a quasi-experimental study that included pre and post assessments for Pineville School and a comparison school, classroom practice data, and prominent feature analysis of student writing.

    Students wrote to one of two counterbalanced informative prompts under controlled conditions in the fall and again in the spring. This time a research team of five exemplary English language arts teachers performed the prominent feature analysis of student writing. The fall analysis revealed the strengths and weaknesses of the young Pineville writers and served as a needs assessment to inform content for the professional development program.

    Prominent Feature Analysis of Pineville Student Writing

    For the prominent feature analysis, team members noted the prominent features of each paper, relying on their professional expertise to distinguish and identify prominent features and calling on other members for clarification. The process included partnered analysis during the early stages, with consensus for papers considered difficult. Preparation and training for the prominent feature analysis cycled through four decision-making processes:

    1. Reading from sets of common papers, team members came to consensus on the features observed in each paper. For example, some team members questioned whether a cumulative sentence should also be classified as a striking sentence. The decision in such cases was to note every applicable category of prominence. Thus an initial list of features and definitions emerged.
    2. Noting newly observed features required periodic pauses for determining whether features should be added to the list or whether the definition of a previously identified feature should be broadened to include it.
    3. Distinguishing between the ordinary and the prominent in elementary writing also fueled discussions. For example, “white as snow” would not rise to the level of prominence in a high school paper and would not be noted as metaphor at that level; in a paper written by a third grader, however, it was considered prominent.
    4. Second readings for consistency led to discussions with the first reader and principal investigators. Fifteen percent of the papers were randomly selected for second readings. As in the seventh grade study, the degree of consistency proved to be high.

    The complete list of prominent features for the Pineville study turned out to be very similar to that of the seventh grade study.

    Immediately following the fall prominent feature analysis, the research team discussed overall impressions of the papers. What do we notice about this set of papers? What are the strengths of these young writers? In what areas should their teachers focus instruction? The group suggested prominent feature content and teaching strategies for the professional development program. The prominent features were to be introduced as content, using strategies for teaching in the context of student writing rather than in isolation. All this was shared with the professional development team and incorporated into the program described below. In this way, prominent features first influenced the needs assessment, then influenced the program, and then made their way into the Pineville classrooms as part of the writing curriculum.

    The Professional Development Program

    The professional development team, working with the school principal, then designed a program that included content topics and teaching strategies, among other components. Content topics included the following: dialogue; cumulative sentences; adverbial leads; precise nouns; vivid verbs; elaborated detail; voice; and organization, including lead sentences and unifying conclusions. Teaching strategies included the following: student choice, reading-writing connections, idea generation and prewriting, mini-lessons, modeling, analysis of first draft writing, teacher/student conferences, revision strategies, editing, publishing, and student/teacher reflection.

    Each teacher participated in 34 hours of professional development, including workshops and demonstration lessons, plus between-session support. In each setting, prominent features were introduced as stylistic or rhetorical elements along with strategies appropriate for teaching them in context rather than in isolation. Table 1 summarizes the on-site program components.

    In addition to the activities that took place at the school site, staff developers provided continuing support in two forms. First, they wrote detailed plans from the demonstration lessons and encouraged teachers to adapt these for their classrooms. Plans included suggestions for whole class, small group, and individual instruction, guidelines for moving through the process of the lesson, and a rationale for each lesson.

    Second, because the teachers needed models for responding to the sometimes intricate aspects of student writing, the professional development team modeled appropriate response to student writing, as shown in Appendix B. Staff developers asked that students work on a single piece over time, taking that piece through multiple drafts. The drafts were sent to writing project staff, who then wrote a response to each student, thus providing a scaffold to support the teachers as they learned to give feedback.

    Table \(\PageIndex{1}\): On-site staff development activities

    a Each classroom hosted a classroom demonstration at least one time so that all students had the opportunity to be “taught” for one class by a staff developer and a small group of teachers from other classrooms.

    b Classroom demonstrations occurred on 12 separate days, three to four demonstrations per day.

    c Each individual teacher attended six demonstration sessions.

    Implementation of Program Strategies

    One of the chief indicators of the success of a professional development program is the extent to which classroom teachers incorporate program strategies into their practice. Toward the end of the school year, 11 Pineville teachers participated in an extensive interview process to determine which, if any, of the program strategies they had regularly incorporated into their classrooms: (1) student choice, (2) reading-writing connections, (3) prewriting, (4) peer response, (5) teacher/student conferences, (6) mini-lessons on specific rhetorical strategies, (7) revision strategies, (8) editing, (9) publishing, and (10) modeling. An implementation-of-strategies score was generated for each teacher as follows: 2 points for full implementation, 1 point for partial implementation, and 0 for no implementation. The possible range of scores was 0–20; the observed range of scores was 6–19, with a mean score of 12.7. The use of strategies by Pineville teachers was judged to be very good.
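    As a small illustration of this scoring, the sketch below (our own, with a hypothetical teacher profile rather than actual interview data) applies the 2/1/0 rule across the ten strategies listed above.

    ```python
    # Sketch of the implementation-of-strategies score: 2 = full, 1 = partial, 0 = none,
    # summed over the ten program strategies (possible range 0-20).
    STRATEGIES = [
        "student choice", "reading-writing connections", "prewriting", "peer response",
        "teacher/student conferences", "mini-lessons", "revision strategies",
        "editing", "publishing", "modeling",
    ]

    # Hypothetical ratings for one teacher, keyed by strategy.
    ratings = {strategy: 2 for strategy in STRATEGIES}  # start from full implementation
    ratings["peer response"] = 1                        # partial implementation
    ratings["publishing"] = 0                           # no implementation

    score = sum(ratings[strategy] for strategy in STRATEGIES)
    print(score)  # 8 strategies at 2, one at 1, one at 0 -> 17
    ```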

    The research team also evaluated the 11 interviews using the 4-point scale of A Descriptive Continuum of Teaching Practice (Graves & Swain, 2004). Level 4 of the continuum describes a completely process-oriented, student-centered practice; Level 3, a partially process-oriented practice; Level 2, a partially traditional, skills-focused practice; and Level 1, a completely traditional and skills-focused practice. Of the 11 teachers interviewed in Pineville, two were rated at Level 4; five at Level 3; three at Level 2; and one at Level 1. Following only one year of professional development, these results were considered very good.

    The following excerpts from the interviews reveal some of the ways teachers applied strategies to teach prominent features in their classrooms. One teacher described a strategy for making reading-writing connections to teach the value of dialogue:

    What they had written wasn’t in dialogue form. After reading a text rich with dialogue, I asked, “How could you use dialogue in your paper?” They changed to dialogue.

    Another described using the model lesson on the prominent feature of cumulative sentences to describe a favorite place. In this lesson, she also used the strategies of student choice and modeling.

    I told the children we were going to write magic sentences using doing words. We gathered in a circle, and I started by modeling for them. I asked them to think about a favorite place… I told them mine was the beach… . We went around probably four times. I told them to think of something they might be doing at the beach. I gave some examples: “watching children bury themselves in the sand.” My assistant modeled as well. We ran out of time. The next day I modeled what another student had said and made a sentence on the board. I did the whole lesson they gave us.

    Yet another described her use of a peer response strategy to focus on the features of description and vivid verbs.

    When they wrote their papers, they skipped lines to make it easier to revise. After they wrote their rough drafts, they got into small groups, four or five in a group, and read to each other. After they read in their groups and got ideas, they went back over their papers, and tried to add descriptive words and vivid verbs.

    The detailed accounts in the interviews confirmed that Pineville teachers were using the strategies to teach prominent features in the context of student writing.

    Pineville Student Writing Performance, Holistic and Analytic

    Following the yearlong program at Pineville School, the fall and spring writing assessments from Pineville and the comparison school were scored independently at a National Writing Project scoring conference (National Writing Project, 2010). Papers were scored analytically and holistically, yielding a total of seven scores per occasion, each on a scale from 1 to 6. The analytic scores included content, structure, stance, sentence fluency, diction, and conventions. An independent summary judgment yielded a holistic seventh score (Swain & LeMahieu, in press). Table 2 shows that over the course of the year, Pineville students, though scoring slightly lower than the comparison students in the fall, showed remarkable gains, both in overall holistic growth and in each of the analytic attributes. Third grade comparison students did improve, though not nearly to the degree that Pineville third graders did.

    Table \(\PageIndex{2}\): Summary statistics for scores by group

    Note: Mean values are given; values in parentheses are standard deviations. N = 435 for program, 217 for comparison group.

    Table 3 summarizes the results of a repeated-measures ANOVA of the pre and post writing assessments for program and comparison groups for each attribute of writing as well as for the holistic assessment.

    Pineville students showed statistically significant improvement in the overall set of scores (and on each individual score) from pre to post writing assessments in relation to the comparison students’ scores, which were essentially unchanged and were statistically indistinguishable across occasions.

    Differences were significant at the .001 level for occasion, for the group-by-occasion interaction, and for six of the seven measures of group; the remaining group effect, for conventions, was significant at p = .008. Pineville students’ own scores also differed significantly between the pre and post assessments, and the significant interaction between occasion (pre or post) and group (program or comparison) indicates that this change is attributable to the program group. Table 3 thus indicates that the significant differences in all assessed areas of writing were due to the program. The group main effects and the group-by-occasion interactions are essentially telling the same story here: the difference between groups is principally due to the fact that only the Pineville students showed a change in performance, improving from pre to post assessment, whereas the comparison students showed no consistent change. In brief, growth in all areas of writing was significant for the Pineville group between the pre and post writing assessments, and significantly greater than that of the comparison group.
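    For readers who wish to run this kind of analysis on their own data, the sketch below shows one way a group-by-occasion mixed ANOVA could be set up in Python. It is only an illustration under assumptions: the column names and the tiny data frame are hypothetical, and the pingouin library’s mixed_anova function stands in for whatever statistical software the original study used.

    ```python
    # Illustrative sketch (not the original analysis): a group-by-occasion mixed ANOVA
    # on a single writing score, using a long-format data frame (one row per student per occasion).
    import pandas as pd
    import pingouin as pg

    df = pd.DataFrame({
        "student":  [1, 1, 2, 2, 3, 3, 4, 4],
        "group":    ["program"] * 4 + ["comparison"] * 4,
        "occasion": ["pre", "post"] * 4,
        "holistic": [3.0, 4.5, 2.5, 4.0, 3.0, 3.0, 3.5, 3.5],  # fabricated placeholder scores
    })

    # "group" is the between-subjects factor; "occasion" is the within-subjects factor.
    aov = pg.mixed_anova(data=df, dv="holistic", within="occasion",
                         subject="student", between="group")
    print(aov[["Source", "F", "p-unc"]])
    ```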

    Table \(\PageIndex{3}\): Repeated-measures ANOVA results for all matched cases on holistic and analytic scores

    These results confirmed our hypothesis that prominent feature analysis could be a valid link between assessment and instruction, but our original question still remained: What features or characteristics of student writing are linked most closely with higher (and perhaps lower) scoring papers? Since NWP’s Analytic Writing Continuum Assessment System provides scores, on a six-point scale, for six attributes of writing plus a holistic score, correlations between these scores and prominent features could now be ascertained (Swain & LeMahieu, in press). A summary of the patterns that emerged from the study follows.

    First, statistically significant correlations were observed between 24 of the 33 individual prominent features and the seven scores (holistic and six analytic). There were some exceptions to this. Chief among these was the tendency for correlations of prominent features to be slightly lower with conventions scores than with any other of the analytic scoring categories. It is important to note, however, that such differences were not statistically tested.

    Second, prominent feature elements considered to be positive attributes in an essay (e.g., balance/parallelism, voice) generally yielded positive correlations with the analytic and holistic scores, whereas negative prominent feature elements (e.g., weak structural core, poor spelling, unfocused) generally had negative or essentially zero correlations with the scores.

    Third, the prominent features that showed the stronger relationships—used here in a relative sense, as none of the correlations observed was moderate or large—with the analytic and holistic scores were: (a) elaborated details, (b) dialogue, (c) sentence variety, (d) effective ending, (e) well-organized, (f) supporting details, and (g) voice. The overall prominent feature scores correlated in the .40s with the holistic score. Clearly, these prominent features (mostly positive) do appear as valid contributors to the scoring judgments on both the analytic and holistic measurements (Swain et al., 2007).
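    The correlational step is straightforward to reproduce: each prominent feature is a binary present/absent judgment and each paper carries continuous analytic and holistic scores, so a point-biserial (equivalently, Pearson) correlation can be computed feature by feature. The snippet below is a minimal sketch with fabricated values, not the study’s code or results.

    ```python
    # Sketch of a single feature-score correlation: a binary prominent-feature indicator
    # (1 = present, 0 = absent) against a continuous holistic score.
    from scipy.stats import pointbiserialr

    # Hypothetical judgments and scores for eight papers.
    dialogue_present = [1, 0, 1, 1, 0, 0, 1, 0]
    holistic_score = [4.5, 2.5, 4.0, 3.5, 3.0, 2.0, 5.0, 3.0]

    r, p = pointbiserialr(dialogue_present, holistic_score)
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```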

    Conclusion

    Some years back our research focused on ways to help teachers make instructional sense of a state writing assessment. In many ways prominent feature analysis accomplishes this, providing the means for both assessment and instruction. Now, though we cannot claim prominent feature analysis as the single cause for the growth in writing of the Pineville students, we suggest that the interaction between the prominent features and the teaching strategies (along with the cooperation and goodwill of the teachers) was paramount. We now understand prominent feature analysis as a valid link between assessment and instruction. The evidence for this understanding is three-fold:

    1. The Pineville study demonstrates the validity of prominent feature analysis as a needs-assessment tool that is grounded in student writing ability.
    2. Results of the Pineville study confirm that students whose teachers participated in professional development that focused on prominent features significantly outperformed students whose teachers did not participate.
    3. Correlations between prominent features and the AWC assessment validate the link between prominent features and the quality of writing.

    As mentioned earlier, between prominent feature analysis and other kinds of writing assessments lies a crucial distinction. Prominent feature analysis derives numerical values from specific rhetorical features whereas other forms of assessment assign numerical values to student writing based on externally described characteristics. The major task of prominent feature analysis is to determine whether or not a specific rhetorical concept has risen to the level of prominence. The major task of other kinds of writing assessment is to determine whether a piece of writing is a B- or a C+, for example, or a 3 or a 4. While holistic assessments provide comparative data across large sets of papers, and analytic scoring provides comparative data that describes quality in the various attributes of writing, prominent feature analysis adds another dimension to the assessment of writing, one that is grounded in writing itself and that brings into play the possibilities for well-informed writing instruction.

    Prominent feature analysis is new, and it is only natural that questions should arise about its efficacy. Already we are exploring how the list of features might be refined, especially the prominent features of genre or content. Further lines of inquiry include the developmental aspects of prominent features, the possibility of ranking features, and a deepening understanding of the interrelationships among features. It seems clear that prominent feature analysis has a vital role to play in the universe of writing assessment.

    Note

    1. The National Writing Project is a network of over 200 university-based sites dedicated to improving writing and teaching in the nation’s schools.

    References

    Becker, A. L. (1965). A tagmemic approach to paragraph analysis. College Composition and Communication, 16, 237–242.

    Cazden, C. B. (1993). Vygotsky, Hymes, and Bakhtin. In E. A. Forman, N. Minick, & C. A. Stone (Eds.), Contexts for learning: Sociocultural dynamics in children’s development (pp. 197–212). New York: Oxford University Press.

    Christensen, F. (1963). A generative rhetoric of the sentence. College Composition and Communication, 14, 155–161.

    Christensen, F. (1965). A generative rhetoric of the paragraph. College Composition and Communication, 16, 144–156.

    Corbett, E. J. (1991). Classical rhetoric for the modern student (3rd ed.). New York: Oxford University Press.

    Elbow, P. (1994). Writing first! Educational Leadership, 62(2), 8–13.

    Faigley, L. (1979). The influence of generative rhetoric on the syntactic maturity and writing effectiveness of college freshmen. Research in the Teaching of English, 13, 197–206.

    Faigley, L. (1980). Names in search of a concept: Maturity, fluency, complexity, and growth in written syntax. College Composition and Communication, 31, 291–300.

    Graves, R. L., & Swain, S. S. (2004). A descriptive continuum of teaching practice (Unpublished manuscript). Mississippi Writing/Thinking Institute, Mississippi State University, Mississippi State, MS.

    Graves, R. L., Swain, S. S., & Morse, D. T. (2011). The final free modifier—once more. Journal of Teaching Writing, 26(1), 85–105.

    Krishna, V. (1975). The syntax of error. The Journal of Basic Writing, 1, 43–49.

    National Writing Project (2010). Writing project professional development continues to yield gains in student writing achievement (Research Brief, 2. 2010). Retrieved from http://www.nwp.org/cs/public/print/resource/3208

    Palacas, A. L. (1989). Parentheticals and personal voice. Written Communication, 6(4), 506–527.

    Sperling, M. (1995). Uncovering the role of role in writing and learning to write: One day in an inner-city classroom. Written Communication, 12(1), 93–133.

    Sperling, M. (1998). Teachers as readers of student writing. In N. Nelson & R. Calfee (Eds.), The reading-writing connection: Yearbook of the National Society for the Study of Education (pp. 131–152). Chicago: University of Chicago Press.

    Sperling, M., & Freedman, S. W. (2001). Research on writing. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 370–389). Washington, DC: American Educational Research Association.

    Swain, S., Graves, R. L., & Morse, D. T. (2007). Effects of NWP teaching strategies on elementary students’ writing (Study). National Writing Project. Retrieved from http://www.nwp.org/cs/public/print/resource/2784

    Swain, S., Graves, R. L., & Morse, D. T. (2010). Prominent feature analysis: What it means for the classroom. English Journal, 99(4), 84–89.

    Swain, S., Graves, R. L., & Morse, D. T. (2011). A prominent feature analysis of seventh-grade writing. Journal of Writing Assessment, 4(1). Retrieved from http://www.journalofwritingassessment.org

    Swain, S. S., & LeMahieu, P. (in press). Assessment in a culture of inquiry: The story of the National Writing Project’s analytic writing continuum. In L. Perelman & N. Elliott (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White. Cresskill, NJ: Hampton Press.

    Witte, S. P., & Faigley, L. (1981). Coherence, cohesion, and writing quality. College Composition and Communication, 32, 189–204.

    Appendixes

    Appendix A. Positive and Negative Prominent Features from the Seventh Grade Study

    Appendix B. Student Draft and Model Response from Professional Development Consultant

    I remember the time me and my dad went fishing. We had caught 5 bass 3 brim and 12 grinals. My dad cast his line and I cast mine. We both hook something. I brought in a bass and my dad finally brought in an alligator. I got so scared if I didn’t see him stay in the water I probably would have jumped off the boat. Later on that day we go back to that spot after the water goes down some and we find some alligator eggs. I got one and broke it. Then we see the mama coming back. Me and my dad turn the boat around and leave. That is the story about my encounter with a mama alligator.

    Dear Adventurous Fisherman,

    You really had an exciting day. I can’t imagine seeing an alligator close up like you did. Where did you go fishing? Was it a lake or river?

    You really built suspense with these sentences:

    My dad cast his line and I cast mine. We both hook something.

    I was really wondering what it would be.

    I want to know more about your dad’s hard work trying to reel in that alligator.

    I loved your sentence that told me how scared you were. It gave your story voice—made it fun to read out loud and made me feel like I know you a little better.

    When you saw that mama alligator coming back to her eggs did you say anything? Did your dad say anything? When you said you and your dad turned the boat around and left, I thought you were going to say something about how fast you got out of there. Can you think of a way to make your reader feel some excitement about getting away from that alligator?