Composition MOOCs: Learning to Write with 67,259 Others

Doug Hesse
Professor and Executive Director of Writing
The University of Denver

Lest you groan at yet another celebration of/handwringing about MOOCs and their promise/threat as savior/moloch of higher education, I'm focusing here on one set of courses—those aspiring to teach writing—and one issue within them: the role of practice and feedback. I'll leave it for others (there are surely plenty) to dissect MOOCs in general or MOOCs in psychology, chemistry, or art history. The heart of the issue for me is whether practice-and-feedback courses—courses with a longstanding "studio" tradition, courses like writing or painting or violin playing—are fit for this environment.

There are three main features of Massive Open Online Courses. One, of course, is their "openness," their availability to students not formally admitted to an institution or tested/pre-requisited into a course. Another is their on-linedness. Digital design allows participation by students removed not only in distance but, to varying degrees, in time. More significantly, though, this allows a wide welter of course materials, not only readings and images but also videos and sound, materials posted not only by instructors but also by students. Of course, this is possible in any online environment, even conventional ones offered through Blackboard.

The game-changing feature, at least for writing, is massiveness. Long-established guidelines set about 60 students as the maximum that writing teachers can effectively teach at a time, whether in four courses of 15, three of 20, or one of 60. Of course, this number is often exceeded, and near-heroic teachers often stanch reductions in quality. However, even exceeding the 60-student guideline by a factor of two or three or five would not begin to meet the needs of the 67,530 students enrolled as of March 21 in the Duke MOOC "Composition I: Achieving Expertise." This course was designed and is taught by Denise Comer, whom I know and respect. Students complete four pieces of writing in 12 weeks: a critical response, an explication of an image, a case study, and an op-ed. In the process they encounter several topics standard in many composition courses. You can read more at https://www.coursera.org/course/composition, and I'll say more below.

Before understanding the challenge of massiveness, however, you need to understand how people learn to write. There is, of course, a "content" to writing that can be "delivered," a set of features that typify different types of writing, a complex set of strategies for analyzing tasks and audiences, inventing and revising content, editing for style and effect, and so on. This content has been codified (and re-codified) since Aristotle's Rhetoric, embodied in textbooks for nearly two centuries. It exists as precepts and advice, often illustrated by examples, and while recent research into discourse communities, genres, and activity theory has significantly complicated that content, it's the sort of stuff that can be packaged online about as well as it can be packaged into a physical lecture. You can test whether students know that stuff ("Define kairos. Define enthymeme."), but knowing about writing doesn't signify one's ability to write, any more than knowing about football signifies one's prospects as quarterback.

Lectures about writing–whether by flesh or by megabyte–are of limited use for the simple reason that people don't learn to write merely through information or exhortation. They learn it no more than folks learn to play piano by reading piano books or watching videos, only never to lay hands on a keyboard. Furthermore, writing skills develop through doing over time, with "ability" defined on a long continuum. Someone who can play "Mary Had a Little Lamb" knows how to play piano, but surely not in the sense of someone who can play Liszt's "Sonata in B Minor." Students moving through college quite naturally encounter more complex tasks with which they struggle and develop some facility, only to encounter yet others.

"Learning to write," then, is a process of acquiring new skills through advice and trial, activities sequenced and coached. Vital to that process is feedback, a combination of "here's my reaction to your writing," "here are your strengths and areas for improvement," "here's some advice on how to revise this piece or approach future tasks like it," and "here's what you should do next." Traditionally, of course, giving feedback has been the role of writing teachers; in fact, I'd call it the main role, with well over half the hours that one spends teaching writing devoted to giving feedback, followed by designing assignments and activities, then by generating resources and leading discussions, with "lecturing" a distant trailer.

You see the challenge, then, for writing MOOCs: if practice and feedback truly matter, how do we accomplish them in a course with tens of thousands of students, especially given the nature of writing as opposed, say, to the nature of political science or even mathematics?

There are three possibilities. One is to have enough teachers or TAs to provide feedback. Consider the 67,530 students in the Duke MOOC. Supposing, say, that each reader/respondent were to work with 200 students, that's 338 faculty. Even if we imagine that enough qualified folks are available–and will work for pay as modest as that offered by places like The University of Phoenix–that's a substantial cost, especially if one of a MOOC's promises is being free (or very cheap) to users.
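The staffing arithmetic above is easy to check. A few lines of Python make the calculation explicit (the 200-students-per-reader caseload is the essay's supposition, not an institutional standard):

```python
import math

enrolled = 67530           # Duke MOOC enrollment as of March 21
students_per_reader = 200  # supposed caseload per reader/respondent

# Round up: a fractional reader still requires hiring a whole person.
readers_needed = math.ceil(enrolled / students_per_reader)
print(readers_needed)  # 338
```

Even halving the caseload to a still-unmanageable 100 students per reader would double the figure to 676 faculty.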

A second possibility is to do away altogether with teachers as respondents, having computers do the work. I've written extensively and critically about whether computers can or should score writing. (Please see PDF, "Can Computers Grade Writing.") In a nutshell, though, the tasks on which computers seem competent are relatively short and tightly circumscribed ones, fairly unlike authentic college writing assignments. For an overview of the research on machine scoring of writing, see http://humanreaders.org/petition/research_findings.htm.

That brings a third option: crowd-source the responsibility for feedback, most pointedly to other students in the MOOC. That's the approach Duke is taking. Denise Comer and her colleagues developed response rubrics for each assignment. When a student submits a draft, he or she receives drafts from four other students and is asked to rate and reply to the other writers. Students then revise their work based on the peer feedback they get. They resubmit. Once again, four peers read and rate the piece, this time against a grading rubric. As the course teacher, Comer provides model responses and analyses, and she selects some submitted writing to discuss and explicate. However, except for those relatively few texts, writing in the course is exclusively coached and judged by other students.

Peer response is a venerable pedagogy even in traditional writing classes. The practice ranges from folks giving holistic, written responses to ideas in a classmate's draft to students applying multiple analytic rubrics to specific features. However, in traditional classes, even traditional classes taught online, peer response is but one aspect of feedback; the teacher has a vital, even central role. In the Duke MOOC (and many like it) feedback and evaluation are entirely ceded to course peers. Research that compares peer responses to expert responses, focusing on samples scored by the same groups, can tell us about the quality of the former, much as Calibrated Peer Review began doing over a decade ago (http://cpr.molsci.ucla.edu/Home.aspx). There can be empirical answers to how accurately a crowd-sourcing MOOC can perform the feedback function crucial to teaching writing. I have serious reservations. It may be that, in terms of checking off a rating, peers can do well enough, but I suspect that, in terms of providing substantive comments like those that expert teachers make, the gap will be significant. Yes, I could be wrong, and I'll wait for the research. I will note, though, that Comer herself offered, on the Writing Program Administrators' listserv, "I do not believe that a MOOC can ever replace an in-person writing course, or an online writing class of the kind currently offered with limited enrollment and for credit/tuition." She then went on to make an eloquent explanation/defense of those purposes that a writing MOOC could serve.

Ultimately, I'd be challenged to design a writing course knowing that I, as expert teacher, would serve considerably more as clockmaker god than as traditional reader, coach, critic, collaborator, and advisor. I wonder, further, what would be lost in other dimensions, of relationships conventionally formed with students. In asking this last question, I don't see the distinction as face-to-face v. online. Clearly, one can know online students pretty well through sustained interactions over time. Just not 67,530 of them. Whether any of these concerns matter for other fields, especially those employing the pedagogy of lecture and multiple-choice tests, I can't say. But these are the salient issues for MOOCing the teaching of writing.

__________

Contact: Doug Hesse | dhesse@du.edu | 303-871-7447 
The University of Denver Writing Program, 282 Anderson Academic Commons, Denver, CO 80210