As a graduate student planning to teach writing at college level, I'm seeking best practices in grading and assessing 21st-century writing. I created this research blog to post responses to scholars, methods, and ideas about assessing writing in digital environments that I study. I invite suggestions and feedback from experienced educators, graduate teaching assistants and graduate students of writing programs--what does and doesn't work in digital writing courses? Please post your comments below. I appreciate any research you recommend, particularly links to articles, videos, websites and blogs. - Karen Pressley, Kennesaw State University

Wednesday, March 16, 2011

Assessing Multimodal Compositions: A Balancing Act for Graduate Student Instructors

My last two posts discussed using social media in writing classrooms and assessing it. In my search for literature on this topic, I found a good example of how three graduate students at Bowling Green State University in Ohio are dealing with similar questions. These grad students are performing a balancing act as they apply old methods of assessment to new methods of writing. What they found, however, surprised me.
The attached article, "The New Work of Assessment: Evaluating Multimodal Compositions," shares their experiences of teaching in a writing program that requires them to incorporate visual rhetoric into their first-year composition classrooms. They describe their struggle to apply their department-wide writing rubric to assignments that ask students to create multimodal texts.
Elizabeth Murray, Hailey Sheets, and Nicole Williams comment on BGSU's rubric, which was updated in 2005 to add a new assessment category, "format and design." They welcome the addition but still feel unsure how to apply the rubric to their students' multimodal projects. Lacking the departmental authority to write their own rubrics or amend the current one, they decided to be proactive and tackle the issue by surveying other instructors' attitudes toward assessing multimodal compositions. Murray, Sheets, and Williams discovered that they were not the only ones challenged by the task of assessing new media compositions and in need of further guidance on how to go about it.
The webtext they created shows that a traditional writing program rubric designed to evaluate "alphabet-only" texts can, in fact, also be used to assess multimodal compositions, but that it requires modification to do so. Their findings provide categories and resources for multimodal assessment.
Section One of their findings draws on digital composition theory and engages the conversation surrounding multimodal theory and assessment, emphasizing that multimodal projects should be evaluated on rhetorical principles. Section Two presents the results of their survey of composition instructors, reflecting current assessment practices.
Section Three suggests how TAs and instructors can use their current rubric to assess multimodal compositions. Here the authors use BGSU's writing program rubric for their analysis. For example, this is how they used the rubric to assess a student's thesis/focus in "alphabetic text," and how they modified it to show how the same rubric would apply to a multimodal composition:
"BGSU Rubric for Thesis/Focus: Demonstrates an awareness of audience, is sophisticated, and is clearly established and maintained throughout.
Multimodal Project: In a multimodal composition, an awareness of audience is demonstrated through a well-chosen selection of both words and images that best meet their needs and persuades the audience of their argument. The argument—or thesis—will not be presented in a single alphabetic sentence as it is in a traditional essay; instead, the thesis will be evident throughout the essay in the variety of modes that are chosen. Focus will be demonstrated by each mode consistently contributing to the overall argument or thesis of the composition."
Murray et al. didn't rewrite the rubric; they simply expanded the contexts in which it applies, for example, the modes in which a thesis could be presented, to include visual rhetoric expressed through modes other than alphabetic text.
Section Four provides examples of multimodal compositions from their students, which these TAs assessed using their traditional rubric. The examples include a digital film, a collage, a slideshow, and a flash animation. For instance, you can view the slideshow created by student #3, followed by the instructor's assessment from a multimodal composition approach. The assessments use traditional rhetorical terms and concepts, which shows that the evaluation is grounded in rhetorical principles; the language differs only in that it incorporates new media concepts, modified to suit the characteristics and nuances of the media used.
Their work inspires me because it de-mystifies the task of assessing multimodal compositions. It is an example of how we can rely on proven traditional methods while updating them to address expanded applications in contemporary work. It also supports my view that traditional methods do need new language that is inclusive of new technological options.

I think instructors are prudent to heed Kathleen Yancey's warning against using "the frameworks and processes of one medium to assign value and to interpret work in a different medium" (from "Looking for sources of coherence in a fragmented world," 2004). But in her article "Between Modes," Madeleine Sorapure, I believe, correctly reads Yancey's statement as a concern about losing the chance to see new values emerging in a new medium, not as a call to develop a new assessment for each new technology.

I would add a concern of my own: scholars in the field of composition studies may err in thinking that every digital technology needs its own form of assessment, as that could mean throwing out best practices of writing assessment in favor of newer, untested ones just to be "current."
