As a graduate student planning to teach writing at the college level, I'm seeking best practices in grading and assessing 21st-century writing. I created this research blog to post responses to the scholars, methods, and ideas about assessing writing in digital environments that I study. I invite suggestions and feedback from experienced educators, graduate teaching assistants, and graduate students in writing programs--what does and doesn't work in digital writing courses? Please post your comments below. I appreciate any research you recommend, particularly links to articles, videos, websites, and blogs. - Karen Pressley, Kennesaw State University

Monday, March 21, 2011

Examining a Summary of Assessment Methods: A Plethora of Choices

I found a lengthy article written by Russel K. Durst in 2006 that offers a comprehensive summary of assessments, past and present, and surveys the prodigious output of scholarship on postsecondary writing from 1984 through 2003. The article was originally published in Research on Composition: Multiple Perspectives on Two Decades of Change; I found it in the Norton Book of Composition Studies. Durst, a professor and head of the English Department at the University of Cincinnati, wrote to express his interest in the intellectual foundations of composition studies and to discuss where we are headed as a discipline.

After providing a detailed description of the evolution of composition studies, he concludes that the field is in a rut for lack of a defining feature or powerful orthodoxy within composition studies to work against, such as current-traditional teaching or the cognitive emphasis. (I think if he were to republish his article, he would revise it to say that the field has developed something to work against--the emergence of multimodal compositions and their role in the writing classroom.)

Like Durst, I am interested in ways in which well-designed assessments for composition in general, and for the digital writing class in particular, can serve as a vehicle for students' personal and intellectual development, self-understanding, and creative expression. I've bulleted a few of his key points and comment after each:
  • Durst comments on the influence of the cultural studies movement in the 1990s, relevant to the critical theory I mentioned in my previous two posts. He makes a significant point about designing assessments for writing that stem from some kind of institutional standard--whose standard? "As an academic movement, cultural studies sought to redefine culture away from its elite and exclusive sense or as a high/low binary, while taking seriously the cultural pursuits of everyday people and showing the relation of those pursuits to people's social class consciousness." Applying this to assessment, I wonder how workable it would be to create assessments for the people sitting in the classroom seats versus some other over-arching standard. Clearly each institution must know its students and establish grading standards accordingly.
  • He writes of how working-class students tend to do poorly in college composition, referring to Mike Rose, who argues that marginalized students often know much more than they seem to but will respond best to approaches that welcome them to the academy, concentrate on students' strengths, and avoid focusing inordinately on surface mistakes. I can see how designing assessments that emphasize strengths rather than penalize weaknesses is more goal-oriented and could help students become better writers.
  • Durst refers to Ira Shor, who examines the inner workings of a critical pedagogy for working-class students in which students help to choose the course subject matter, requirements, and goals for assessment. I commented on Shor's work in my last blog post.
  • "Evaluating the quality of student writing, whether as a placement strategy, during a course, or at the exit point, has been and remains a major part of writing instructors' activity and researchers' inquiry..." He mentions the e-portfolio as an assessment tool but names no others, saying only that "...and development of new approaches to teacher response have taken place in the past twenty years. Composition scholars...often show considerable discomfort with the emphasis on assessment. Negative associations with the act of grading are common, such as Belanoff's 1991 reference to grading as 'the dirty thing we do in the dark of our offices.'"

  • Durst discusses how the politics of assessment has figured prominently in the research literature of the late 20th and early 21st centuries. Beginning with Richard Bullock and John Trimbur in The Politics of Writing Instruction: Postsecondary (1991), composition specialists undertook a rethinking of the nature and purpose of assessment, wishing to enhance its formative qualities and move away from the exclusivist notion of assessment as a weeding-out process. (Bullock and Trimbur revised this work in 2011.) In this volume, Schwegler (1991) argues against universal standards and for a different paradigm in which the teacher is viewed as a fellow reader and a writing coach rather than an authoritarian and prescriptive reader. Agreeing with Althusser's view that "education helps reproduce the dominant relations of production in a society," Schwegler acknowledges that teachers will always retain power but believes that they can undermine that power and make the class more egalitarian by responding to student work as readers and collaborators and by foregrounding rather than suppressing questions of value and ideology.

  • Durst discusses the work of Ball (1997), who looks at the interaction of culture and assessment as it impacts low-income students of color. Ball believes that writing assessment is part of the power culture that exists in educational institutions and that there is a need to include the voices of more teachers from diverse backgrounds in dialogue concerning writing assessment.

  • Holdstein (1996) attempts to conceptualize a system of writing assessment more congruent with feminist notions of self-reflectivity and inclusiveness. She writes on topics such as the social character of scholarly writing; the factors of gender and feminism in assessment programs; and power, genre, and technology.

  • Durst mentions Huot's comprehensive analysis of research on writing assessment in "Toward a New Theory of Writing Assessment," which ends with a detailed discussion of theoretical principles that should underlie a programmatic assessment of students' written work at any level of education, emphasizing the local, context-dependent nature of such activity. Durst says "all of these efforts to reshape writing assessment--while not yet serving to eliminate traditional forms of social and academic hierarchy--have succeeded in sensitizing writing instructors and program directors to problems with conventional assessment and in persuading many to reduce such problems as much as is possible within the constraints of higher education."

  • Yancey's 1999 article addresses the concepts of reliability and validity that have functioned as opposing poles in the history of writing assessment, as composition specialists have struggled to develop effective forms of assessment. Reliability refers to the idea that different raters should score the same piece of writing as consistently with one another as possible. Validity is traditionally defined as making sure a test measures what it is supposed to measure. Durst says that in recent years, composition professionals have focused more on validity, favoring the portfolio, and less on reliability, which is easier to achieve with a more controlled, holistically scored essay test.
  • Durst discusses another idea that emerged--the DSP, or directed self-placement method, proposed by Royer and Gilles (1998, 2003). Under this approach, composition administrators speak to incoming students, explain the options open to them, and recommend a course, with students ultimately placing themselves based on their own sense of their writing ability. While I believe Paulo Freire and Ira Shor would applaud this approach, I question whether students have sufficient perspective upon which to base their placement decision and whether there is enough empirical evidence that directed self-placement works effectively.

Exploring the literature of assessing writing has been productive, a valuable orientation as I prepare to teach. While there are guidelines and institutional parameters, nothing about assessment is written in stone. I find that I have plenty of proven methods to choose from, and new ones to consider as I plan my classroom writing assignments. 

I'm intrigued by the fact that in 2006 Durst wrote this extensive summary of assessment, its history in composition studies, and its direction for the future without differentiating specifics for multimodal composition or digital writing in general. My next post picks up on the point of the e-portfolio as an assessment tool.
