WPA Panel: "Revising Rubrics: Rebuilding
Assessment in Creative and Rhetorically Effective Ways"
Carol Samson
A group of DU Writing Program Lecturers,
Richard Colby, Rebekah Shultz Colby, David Daniels, and Blake
Sanz, presented a panel at the July WPA conference which examined
the topic of "Rebuilding Assessment in Creative and Rhetorically
Effective Ways." Their papers covered a range of perspectives, exploring
the evolution of the DU Writing Program's concept of assessment, the
rebuilding and structuring of student-driven rubrics, the effectiveness
of using creative writing-styled workshop methods, and even the
possibility of rethinking revising itself, that is, a consideration of
the revising of revising.
To begin, Richard Colby situated the panel topic, arguing the
importance of programs developing, rather than finding, their own means of
assessment. In his paper, Colby offers a brief review of the DU
Program's evolution: the development of course goals during Fall Quarter
2006 and the weave of the Lecturers' backgrounds:
"The lecturers in the program, as in many programs, have degrees from a
variety of disciplines and institutions, although they are all
experienced writing teachers. I mention this because we developed the
course goals, qualities and approaches to writing that we all felt were
most important, without being immediately constrained by how we were
going to assess them."
He cites Brian Huot and Michael Williamson: "If assessment procedures
are developed from specific curricular goals, then the assessment will
tend to influence teachers and students towards mastering those goals.
If, however, the assessment is based upon only those goals that are
easily measured, then curriculum will be limited to its assessment
procedures." Colby, then, points to the decision of the DU Program to
follow a procedure that would influence teachers and students to move
toward goals and a decision to incorporate a reflective essay at the end
of the first year. The hope was that, by submitting portfolios of four
pieces of writing, students could demonstrate that they understood the
course goals and could articulate how they met each goal. In the end,
the portfolios would be used for program development research. To
justify the flexibility and the use of a reflective essay, Colby points
to an article by Edward White, "The Scoring of Writing Portfolios: Phase
2," which points out the merits of a reflective essay portfolio: "The
reflective letter the student prepares after the portfolio has been
compiled becomes the overt argument, using the portfolio content as
evidence, that the goals have been met, at least in part. If the
evidence does not demonstrate that the goals have been met, the
reflective letter can discuss why."
Colby notes that the first-year assessment led to much disagreement,
discussion, and compromise. The sample of 110 portfolios was read and
assessed by Lecturers, norming sessions were held to review course
goals, and the portfolio meetings generated discussions that led to a
revised portfolio structure for the second year. The first-year
assessment revealed a 68% Satisfactory or Above score for the
understanding of course goals and a 72% score for students' ability to
demonstrate those goals. The problem was that some instructors spent
more time on the reflective essay, others reviewed course goals, and
still others employed reflective essays throughout the term. According
to Colby, the first-year attempt was overly complicated and problematic
for its cherry-picking approach to the course goals. In the revising of
the assessment tool, the committee focused on three primary topics:
ability to write in two academic research traditions; understanding of
rhetorical difference between writing for academic audiences and writing
for popular audiences; and proficiency in finding, evaluating,
synthesizing, critiquing, and documenting published sources. Colby
argues that this revised edition focuses the assessment on the
students' ability to understand and provide evidence for the goal,
rather than privileging better reflective essays or evidence essays.
This better allows instructors to have their own assessments and
reflective essays in their classes. Colby concludes by reiterating
that assessment that addresses the understanding of the course goals
provides more opportunities for instructors to explore varieties of
teaching and assessment in their individual classes.
Linking to the background Richard Colby offered on program assessment,
Rebekah Shultz Colby focused her argument on the use of rubrics
that inform good writing in classroom contexts. The problem with
rubrics in general, Shultz Colby suggests, is that they become a static
set of criteria that measures only one type of generic writing: one
type of writing that is usually traditionally essayistic and only
written within English departments, or worse yet, a type of writing that
is only written for assessment purposes within English departments.
Even portfolios, which contain a multitude of writing strategies
addressed to a variety of audiences, are usually assessed using a
generic rubric which, again, creates a generic, arhetorical type of
writing. Put simply, rubrics and ratings systems force-fit
performances; they generalize or encourage synthetic representations of
rhetorical performance. According to Shultz Colby, because good
writing varies so much depending on the context of audience, purpose,
and genre, each piece of writing really needs its own individualized
rubric if assessment is to capture accurately how well it accomplishes
its own unique rhetorical aims.
Seeking to locate writing assessment within writing practice in order to
validate the rubric, rather than merely to seek inter-rater reliability of
findings, Shultz Colby offers a classroom methodology worth quoting in
detail, as it attempts to line up with, as she argues, our postmodern,
social constructionist, rhetorical values in its flexibility:
"In my class, I have developed a type of assessment that is similar to
Pamela Moss's assessment based on a discussion of writing values and
Robert Broad's dynamic context mapping, in which assessors
ethnographically transcribe and then code the writing values under
discussion during group assessment and then use that coding as a type of
rubric for further assessment. In my class, for each different genre
that students are assigned to write, for instance, a letter to the
editor. . ., they also are assigned to bring to class an example of that
genre that they believe is written in a way that is rhetorically
effective for its audience. Then, as a class, we discuss the written
features that define rhetorically effective pieces within that genre by
discussing specifically how each genre feature is effective for its
intended audience and purpose and why. Then we generate a list of the
rhetorically effective features within that genre. Of course, the
deciding criteria are always audience and purpose, which means that, even
within the same genre, some genre features are effective for some
audiences but ineffective for others. . . . [Students, then,] become
aware of the writing features and constraints of the genre [even as they
see] quite a bit of flexibility and difference within that genre. . . .
This list of genre criteria, then, becomes the writing criteria in
their rubric for assessment. It is this criteria [that] they use to
assess each other's work during peer review, and it is also the criteria
I use to evaluate their papers. . . . There are . . . few surprises when
it is time for me to grade. Students know, even before they start
writing their papers, what the criteria for evaluation are going to be.
This also helps to make them more rhetorically aware writers who are
conscious of the choices that they make within their own writing."
Shultz Colby concludes with the thought that by co-constructing their
own rubrics, students help us to solve the dilemma of assessment
de-contextualization. The student-driven rubrics do locate assessment
within writing needs and practices. Students become more reflective
about their writing choices, and they become more skilled at reading
genre and knowing its methodologies. In the end, says Shultz Colby, the
practice of co-constructing rubrics helps students become more flexible
writers because they have some agency in the assessment process.
Looking at the panel topic from an alternative point of view, fiction
writer and rhet-comp instructor Blake Sanz, who grew up in New
Orleans, entitled his paper "An Assessment Named Desire: Beyond
Practical Notions of What Makes Good Student Writing." In the paper,
Sanz attempts to sort out the ways in which the desire to write can be
implanted in the composition classroom. While, as he notes, students in
a creative writing class may be working with a brand of writing they
want to do, sometimes composition students feel they are required to
complete an indoctrination process that separates the need to write from
the desire to write. Sanz, then, attempts to design assignments for the
composition classroom that, while incorporating the skills necessary for
rhet/comp work, also promote the desire to write:
"[My assignments] include the following final line: 'Whatever you decide
to focus on, be more intent on thinking for yourself, and less intent on
following whatever you imagine is the formula for success.' I
reiterate this over and over in class aloud, and then later, as we
workshop early drafts. I point out examples in which. . .a student
clearly demonstrates how she has taken on the challenge of using writing
to further her understanding of the paper's content. The intended effect
is to get the rest of the class to see the possibility that, while
writing essays might not be a trip to the bar, it also might be
something more than a requirement."
In Sanz's classroom, the workshop discussions point to specifics in the
sample papers. Students look for a clear sense of sincere questioning
on the part of the writer, and even a considered deviation from the
assignment prompt or a joke placed at an opportune time can provide
a direction for thought. After two days of workshopping the drafts,
students come to understand these goals. They unpack the language of the
prompt and begin to read the student drafts with an eye for the writer's
intent and strategy. Sanz hopes they see that, in part, they are being
graded on how much they seem to have reached a point of thought that has
caused them to want to explore the topic at hand.
Sanz suggests that in setting up a workshop that encourages the desire
to write, instructors may borrow from the creative writing workshop
model and use a variety of techniques drawn from fiction writing.
Sanz points to novelist Bret Lott's use of the personal essay in the
composition classroom. Put simply, Lott uses the
Writing-Teacher-As-Writer model. He incorporates fictive terminology
(plot, character, dialogue) to encourage students' desire to write, and
he is willing to point to flaws in selected written passages in order to
promote revision. Another writer, Mary Ann Cain, believes that by
"simply being in the presence of the writing-teacher-as-writer, [and
being] aware of that teacher's way of understanding what a good story
is, [the student] is able to discern what her story is capable of, and
is therefore able to revise it." Both of these writers, in Sanz's view,
point to the fact that the goals of the writing class are communicated
as much through an understanding of the kind of writer the teacher is as
by the prompts or the syllabus. In addition, Sanz offers one final
tool for the composition classroom: an inter-mingling of creative and
essayistic prose. He cites scholar Doug Brent's argument that "the same
sort of tropes that are sometimes held to characterize poetic language
can be shown to crowd into non-poetic or ordinary language, [and
likewise] ordinary language is replete with fictive speech acts such
as imitation, joking, hyperbole, hypothesis, even extended narrative."
Put simply, Sanz's paper calls for a continued examination of ways to
get students to want to write. He is careful to say that his
assessment-via-creative-writing-workshop method and the
writing-teacher-as-model do not set aside pragmatic and practical
ends. Rather, he calls for a balance between meeting the practical needs
of composition and encouraging some kind of desire in student
writing. He suggests an assessment procedure that is generated out of
workshops wherein students have agency, and he affirms pedagogical
models wherein writing teachers are willing to display their enthusiasms
about good writing without being prescriptive.
David Daniels's paper, entitled "Revising Revising, or the
William Stafford Bastard-Method of Assessment," rethinks revision: "To
revise does not always mean to get better. It sometimes means to get
far, far worse, to become bogged down in ideas and theories, to lose
sight of original impulses and lines of reasoning." Daniels challenges
the concept that revision is one more step toward mastery, a final
stage. What is generally deemed the final stage in the writing process
is, in fact, a messy one: indeterminate, sometimes disastrous,
sometimes very good. He rejects the simple three-fold process of
invention, drafting, revising, which is still often taught in
composition classrooms, in favor of teaching a more recursive revision
process similar to that used in a creative writing program poetry
workshop, one which might include an open-ended plan and a revision theory
that embraces indeterminacy. Seeking to interrogate teaching practices
and to challenge over-simplified models, Daniels sides with scholar
David Russell, who sees writing not as one universal process but as
plural processes.
In considering the nature of revising, Daniels offers three anecdotes
that he tells his writing students:
A. The poet Carolyn Kizer's narrative concerns the rejection of one of
her poems by her high school literary magazine. Stubbornly and with
youthful drive, she, at 17, pressed to get it published and succeeded in
placing it in the New Yorker. Daniels explains the moment: "Ha
ha, Kizer must have thought, as any 17-year-old would. Yet, and here's
the part of the story I emphasize to my students, it took Kizer more
than a decade before another poem of hers was accepted, and not
just by the New Yorker, but by anyone."
B. The poet Alan Dugan, at a young age, published his first book of
poems in 1961, and this book went on to win the Pulitzer Prize. Twenty
years later, Dugan's New and Collected Poems seemed slow progress
for such a promising talent. Daniels points out that what took him so
long was revision: Dugan revised some of those early poems with the
eye of maturity and growth, looking backward, and he is famous for having
revised them badly. Critics hated the new versions, and in this sense,
revision made his early brilliance worse.
C. The poet William Stafford, perceiving the hyper-professionalization
of the poets in his classroom, in the emphasis they placed upon finished
products and in their need to correct in order to get published,
resisted the role of end-all advice giver. Instead, Stafford invented a
workshop form wherein the student whose poem was being discussed was
asked to leave the room. Daniels writes: "(That bastard!) Students would
in turn receive written comments from their peers, but not from Stafford
himself. (That royal bastard!)" In other words, Stafford deprived his
students precisely of what they most wanted: his Godly stamp of approval
or, in most cases, his stamp of disapproval with advice on how to
improve. Irresponsible cruelty on Stafford's part, or brilliance? In
the classroom moment, without the writer of the poem present, Stafford
offered advice abundantly. He questioned each poem's means of execution.
He asked questions about diction, scope, and purpose. He suggested
advice for revision even as the student writer, the author of the
piece, the one most likely to benefit from Stafford's wisdom, was in
the hallway. In Daniels's analysis, Stafford set aside his role as
authority figure and became a collaborator, critiquing each student poem
in the same way he might look at a Sylvia Plath poem or a John Donne
poem. The students, then, took away what Stafford said and applied the
method to their own revisions: something like, "Boy, Tina's metaphor
was really maudlin and so is mine." Or, "Tommy's line breaks really
mangle this rhythm and so do mine!"
Daniels asks that teachers consider becoming co-investigators, students
themselves in the classroom. He suggests that they learn to let go
and, in Lee-Ann Kastman Breuch's terms, to recognize their methods of
teaching as indeterminate activities rather than as exercises in
mastery. This, of course, does not mean a loosening of standards in the
name of flexibility, but rather a flexibility that changes according to
"the immediate present of each student's text. . . working
collaboratively with students to unearth potentials and possibilities of
each student text. . . [recognizing] the unique situatedness of each
piece of writing rather than merely relying on foundational principles
or rules."
In his comparison of Stafford's method with composition classrooms,
Daniels admits that most writing teachers are not working with graduate
students in poetry who might take easily to Stafford's provocative
method. Daniels knows that Stafford's poets will continue revising
without him, that poems themselves may be more flexible and capable of
absorbing weirder contours than traditional essayistic prose or
traditional work in genres, and that, perhaps, there simply are more
rules to live by in the composition classroom. But Daniels is thinking
in larger terms. He is considering the turn away from essayistic prose
in favor of the multimodal project, the evolution of new literacies, and
the public and civic writing genres, all of which, as he says, indicate
"extensive and varied and weirdly contoured possibilities." Daniels,
like Stafford, would discourage the student impulse to privilege
product over process, or the view of product as the end-all and
revision as a guarantee of perfection. He just wants writing students
and teachers to see that revision can fail to achieve what we had hoped
or bargained for, and he admits that the development of a new way of
seeing revision is not easy. He continually finds himself confronted
with questions like "How can I make this paper an A?" He is often
thrust into the position of being regarded as a proxy for all potential
readers. He is well aware that students still see process as an early
stage and regard product/grade as the heart of the matter. He has,
however, adapted a version of Stafford's Bastard Method. He assigns
Revision grades based on the quality of the finished student product
and upon the creativity and complexity of the revision. He requires a
Revision Memo with each draft in which students explain their
decisions on revisions and cite one or more of their classmates' drafts
as points of comparison. He asks that students comment on their
shortcomings based on their perceived audiences. Then, too, in peer
review sessions, he listens to and assesses verbal feedback as well as
asking students to cite specific examples of useful feedback from peers.
He wants the students to learn the truth about revision: that revision
can make things worse, that revision is a sort of trying-on of
possibilities, that revision can be a final act of invention and not
perfection. In the end, Daniels hopes that his revision of the Stafford
Method does re-envision revision as a chance to improve the quality of
student writing and to increase students' awareness both as writers and
as critical readers of their own and others' work.