As a graduate student planning to teach writing at the college level, I'm seeking best practices in grading and assessing 21st-century writing. I created this research blog to post responses to the scholars, methods, and ideas about assessing writing in digital environments that I study. I invite suggestions and feedback from experienced educators, graduate teaching assistants, and graduate students in writing programs--what does and doesn't work in digital writing courses? Please post your comments below. I appreciate any research you recommend, particularly links to articles, videos, websites, and blogs. - Karen Pressley, Kennesaw State University

Monday, March 21, 2011

Assessing Writing through the E-Portfolio: Some Pros and Cons

(Note: The literature about online portfolios refers to these artifacts by a variety of terms, such as e-folios, E-Portfolios, and portfolios. In this blog I use whichever form of the term appears in the article I am discussing.)

In my earlier post on March 21, I wrote of Russel Durst's 2006 article on the history of composition studies, which includes an extensive summary of writing assessments. He notes Kathleen Yancey's 1999 article, "Looking Back as We Look Forward: Historicizing Writing Assessment," which discusses the E-Portfolio as a strong tool for assessing students' writing development. While Yancey argues for E-Portfolios, there are other pros and some cons to consider. Durst says:

    "Indeed, one often-cited benefit of E-Portfolio assessment is that grading can be deferred until late in a course and students ostensibly can focus instead on developing as writers and thinkers, without being distracted by worrying about the dreaded grade. Other scholars have countered that students' concerns about assessment are never far below the surface, no matter how much instructors seek to de-emphasize grading, and that evaluation anxiety may be most intense in courses that offer the least feedback on student performance..."

Yancey's work on portfolio assessment is viewed by Durst as a landmark and a most helpful guide for instructors. But he also contrasts her work with skeptical comments from other scholars. For example, Durst writes that Peter Elbow worries that "teachers and administrators might be led to adopt a reductive holistic score, undermining the complexity of a diverse portfolio, and also that a portfolio approach could overemphasize assessment, thus undermining risk, discovery, and play," key aspects of writing for Elbow and others in composition.

Durst writes that Condon and Hamp-Lyons found that "teachers reading portfolios often made their judgments early, before having read the majority of texts. They conclude that program administrators need to work proactively and closely examine the work of the various stakeholders to ensure that the assessment is doing what they think it should be doing."

Revisionist work on portfolios by Broad (2003) takes what Durst terms “a hermeneutic as opposed to psychometric approach. Broad argues that portfolios are most useful in bringing teachers together to discuss criteria and raise pedagogical, evaluative, and theoretical issues in writing instruction, and that programs should not try to establish system-wide standards or develop rubrics.”

Around the time Durst wrote the aforementioned article in 2006, the CCCC published a position statement, "Teaching, Learning, and Assessing Writing in Digital Environments," as well as a position statement on "Writing Assessment," which it updated in March 2009 from the original 2004 version. These are the most concise statements I have seen on the topic. They give me a sense of stability as I wade through the extensive literature on assessing writing through traditional methods and the newest literature about digital compositions. I especially appreciate the guiding principles these statements offer, which are too many to include in this blog's limited space.

The CCCC position statement on Principles and Practices in Electronic Portfolios includes details about how E-Portfolios communicate various kinds of information for the purposes of assessment. The statement is followed by a list of links to more than thirty examples of different forms of E-Portfolios, what the CCCC calls "well-conceived e-portfolio projects," two of which I have included here to view:

This CCCC page provides links to instructional videos for creating online portfolios, such as this YouTube video offering instruction for creating an "eFolio," and links to collections of student portfolios, such as these selected 2010 portfolio showcase winners. The site offered by Northwestern University in Evanston, Illinois, provides a discussion, "The Digital Convergence: Extending the Portfolio Model" (2004), that elaborates on "ePortfolio thinking" and offers theories behind different formats--showcase ePortfolios, structured ePortfolios, learning ePortfolios--as well as assessment ideas for each.

Overall, it seems that the E-Portfolio has, as Durst said, "emerged as the form of writing assessment most preferable to composition specialists for its heuristic as well as its evaluative power," but this method is not without drawbacks, as some of these scholars note.

When I think about a digital writing classroom, I believe that assessments toward specific goals need to be the first thing on the instructor's mind when designing the curriculum. The E-Portfolio seems a particularly effective way to assess digital writing, but only if the instructor provides criteria with specific rubrics for each project, so that students can use these as guides as they develop their work and improve their writing.

Examining a Summary of Assessment Methods: A Plethora of Choices

I found a lengthy article written by Russel K. Durst in 2006 that offers a comprehensive summary of assessments, past and present, and surveys the prodigious output of scholarship on postsecondary students' writing from 1984 through 2003. The article was originally published in Research on Composition: Multiple Perspectives on Two Decades of Change; I found it in the Norton Book of Composition Studies. Durst, a professor and head of the English Department at the University of Cincinnati, wrote to express his interest in the intellectual foundations of composition studies and to discuss where we are headed as a discipline.

After providing a detailed description of the evolution of composition studies, he concludes that the field is in a rut for lack of a defining feature or powerful orthodoxy within composition studies to work against, such as current-traditional teaching or the cognitive emphasis. (I think if he were to republish his article, he would revise it to say that the field has developed something to work against--the emergence of multimodal compositions and their role in the writing classroom.)

Like Durst, I am interested in ways in which well-designed assessments for composition in general and the digital writing class in particular can serve as a vehicle for students' personal and intellectual development, self-understanding, and creative expression.  I've bulleted a few of his key points, and comment after each:
  • Durst comments on the influence of the cultural studies movement in the 1990s, relevant to the critical theory I mentioned in my previous two posts. He makes a significant point about designing assessments for writing that stem from some kind of institutional standard--whose standard? "As an academic movement, cultural studies sought to redefine culture away from its elite and exclusive sense or as a high/low binary, while taking seriously the cultural pursuits of everyday people and showing the relation of those pursuits to people's social class consciousness." Applying this to assessment, I wonder how workable it would be to create assessments for the people sitting in the classroom seats versus some other overarching standard. Clearly each institution must know its students and establish grading standards accordingly.
  • He writes of how working-class students tend to do poorly in college composition, referring to Mike Rose, who argues that marginalized students often know much more than they seem to and respond best to approaches that welcome them to the academy, concentrate on students' strengths, and avoid focusing inordinately on surface mistakes. I can see how designing assessments that emphasize strengths, rather than penalizing weaknesses, is more goal-oriented and could help students become better writers.
  • Durst refers to Ira Shor, who examines the inner workings of a critical pedagogy for working-class students in which students help to choose the course subject matter, requirements, and goals for assessment. I comment on Shor's work in my last blog post.
  • "Evaluating the quality of student writing, whether as a placement strategy, during a course, or at the exit point, has been and remains a major part of writing instructors' activity and researchers' inquiry..." Durst mentions the e-portfolio as an assessment tool, but no other tools, other than to say "...and development of new approaches to teacher response have taken place in the past twenty years. Composition scholars...often show considerable discomfort with the emphasis on assessment. Negative associations with the act of grading are common, such as Belanoff's 1991 reference to grading as 'the dirty thing we do in the dark of our offices.'"

  • Durst discusses how the politics of assessment has figured prominently in the research literature of the late 20th century and into the 21st. Beginning with Richard Bullock and John Trimbur's The Politics of Writing Instruction: Postsecondary (1991), composition specialists undertook a rethinking of the nature and purpose of assessment, wishing to enhance its formative qualities and move away from the exclusivist notion of assessment as a weeding-out process. (Bullock and Trimbur revised this work in 2011.) In this volume, Schwegler (1991) argues against universal standards and for a different paradigm in which the teacher is viewed as a fellow reader and a writing coach rather than an authoritarian and prescriptive reader. Agreeing with Althusser's view that "education helps reproduce the dominant relations of production in a society," Schwegler acknowledges that teachers will always retain power but believes that they can "undermine that power and make the class more egalitarian by responding to student work as readers and collaborators and by foregrounding rather than suppressing questions of value and ideology."

  • Durst discusses the work of Ball (1997), who looks at the interaction of culture and assessment as it impacts low-income students of color. Ball believes that writing assessment is part of the power culture that exists in educational institutions and that there is a need to include the voices of more teachers from diverse backgrounds in dialogues concerning writing assessment.

  • Holdstein (1996) attempts to conceptualize a system of writing assessment more congruent with feminist notions of self-reflectivity and inclusiveness. She writes on topics such as the social character of scholarly writing; the factors of gender and feminism in assessment programs; and power, genre, and technology.

  • Durst mentions Huot's comprehensive analysis of research on writing assessment in "Toward a New Theory of Writing Assessment," which ends with a detailed discussion of theoretical principles that should underlie a programmatic assessment of students' written work at any level of education, emphasizing the local, context-dependent nature of such activity. Durst says, "all of these efforts to reshape writing assessment--while not yet serving to eliminate traditional forms of social and academic hierarchy--have succeeded in sensitizing writing instructors and program directors to problems with conventional assessment and in persuading many to reduce such problems as much as is possible within the constraints of higher education."

  • Yancey's 1999 article addresses the concepts of reliability and validity, which have functioned as opposing poles in the history of writing assessment as composition specialists have struggled to develop effective forms of assessment. Reliability refers to the idea that different raters should score as consistently with one another as possible. Validity traditionally means making sure a test measures what it is supposed to measure. Durst says that in recent years, composition professionals have favored portfolios by focusing more on validity and less on reliability, which is easier to achieve with a more controlled, holistically scored essay test.
Durst discusses another idea that emerged--the DSP, or directed self-placement method, proposed by Royer and Gilles (1998, 2003). Under this approach, composition administrators speak to incoming students, expanding the options open to them and helping them choose the appropriate course based on the students' own sense of their writing ability. While I believe Paulo Freire and Ira Shor would applaud this approach, I question whether students have sufficient perspective upon which to base their placement decisions and whether there is enough empirical evidence that directed self-placement works effectively.

Exploring the literature of assessing writing has been productive, a valuable orientation as I prepare to teach. While there are guidelines and institutional parameters, nothing about assessment is written in stone. I find that I have plenty of proven methods to choose from, and new ones to consider as I plan my classroom writing assignments. 

I'm intrigued by the fact that in 2006 Durst wrote this extensive summary of assessment, its history in composition studies and its direction for the future, without differentiating specifics for multimodal composition or digital writing in general. My next post picks up on the point of the e-portfolio as an assessment tool.  

Principles for Assessing Writing: Whose Views Become the Standard?

My post of March 19 touched on my interest in critical theory/critical pedagogy and how I might apply it to a digital writing classroom. While searching for other sources to make connections between critical pedagogy and assessment of digital writing, I found "The Relationship Between Critical Pedagogy and Assessment in Teacher Education."

The article addresses the question from my last post about assessment methods that could be construed as discriminatory (from the perspective of critical pedagogy), and it offers some principles I am considering applying in my writing classroom. Each principle I pulled from the article appears in bold, and my comments follow:
  • "To achieve a critical approach to assessment, it must be centered on dialogic interactions so that the roles of teacher and learner are shared and all voices are validated."
I find that dialogic interactions between me as instructor and my students would be a natural aspect of qualitative assessments, but not of quantitative ones. Short-answer essays, contract-based grading, and the like encourage the voice of the student and a writer/audience relationship between teacher and student.
  • "It must foster an integrated approach to theory and practice, or what [Paulo] Freire would preferably term as praxis - theory in action."
Theory in action could include the teacher working with students to develop critical consciousness: helping them develop the ability to define, analyze, and problematize the economic, political, and cultural forces that shape their lives, and to see that these forces do not completely determine their lives. Using blogging in the writing classroom is a great way to do this. KSU graduate student David Caudill's idea of using Twitterfall at the beginning of class as a freewriting tool, with students writing responses to news events, is another. Assessment of this type of writing would need to be qualitative and designed with the goal of building the student's language use in ways that lead toward the student becoming an agent of action.
  • "It must value and validate the experience students bring to the classroom and importantly, situate this experience at the center of the classroom content and process in ways that problematize it and make overt links with oppression and dominant discourses."
My comment on the previous principle applies here as well. Another idea is to have students write (blog, perhaps) about life experiences around themes such as family life, work, marriage, and social interaction that represent their perceptions of the world. The instructor could orient students to look for a dominant ideology that may be present, determine its source, examine how it is reproduced, and decide whether it needs to be disrupted. This would make for some great writing exercises. My assessment of this type of exercise would be qualitative, centered on the student's focus on the theme, the representation of it, the support of it through the body of the text, and so on, using basic rhetorical principles.
  • "It must reinterpret the complex ecology of relationships in the classroom to avoid oppressive power relations and create a negotiated curriculum, including assessment, equally owned by teachers and students. Such an approach no doubt creates challenges and discomfort but opens up creative possibilities for the reinvention of assessment."
I like Ira Shor's idea (March 19 post) of using contract grading, or of having students review my proposed rubric and then modify it, so they could have a voice in its creation. I've seen Dr. Laura McGrath do this in my Digital Technology in the Writing Classroom course, when it came time to establish an assessment tool for our class's collaborative statement on the class wiki. I enjoyed having the option to participate; however, I did not give my input because I wanted to observe how other students responded to the rubric without my comments included.

Some questions I raise now, as I did during my study of critical pedagogy last semester: Does critical pedagogy impose interpretations and ideas about oppression? Who determines what oppression is and who the oppressors are? Are we using doublespeak when we speak of the "emancipatory authority" of the instructor? And what do our students have to gain from a scrutiny of values and conditions that work to ensure their privilege (the privilege of many)? These are points to consider while designing assessments for writing exercises.


Saturday, March 19, 2011

Applying Critical Pedagogy to Assessing Writing Today: Ideas from Ira Shor

I know I am still at the tip of the iceberg when it comes to knowing the factors that affect assessment of writing, particularly multimodal compositions. Aside from considering the elements of such compositions--such as sound, visual images, document design and layout, interactivity, and content--I'm particularly interested in underlying pedagogies relating to assessment.
While studying pedagogical approaches in graduate composition studies courses, I was interested in critical pedagogy stemming from educators including Paulo Freire, bell hooks, Henry Giroux, Doug Kellner, and Ira Shor, and in critical theory about social power and authority from Michel Foucault and Antonio Gramsci. 21st Century Schools, an organization that provides educational staff development and curriculum design resources, offers a detailed flow chart of how critical pedagogy has developed since the days of Plato and Aristotle. An overriding theme in critical pedagogy is the commitment to empower the powerless and transform conditions that perpetuate injustice, such as discrimination in education.
For my current research on critical pedagogy, I explored Ira Shor's 1996 book, When Students Have Power. This text exemplifies his ideas as a proponent of teaching democratically and sharing power with students, versus the traditional, patriarchal, patronizing approach in which teachers are the source of knowledge who pour information into students. I hoped to find ideas about grading and assessment that could be applied to the 21st Century classroom.
I simplify Shor's ideas about quantitative and qualitative assessment for the sake of space in this blog. But I do want to share a passage that underscores critical pedagogy in the writing classroom and that shapes his choice of quantitative and qualitative grading methods. He introduces his pedagogy by disclosing his own journey as a teacher: he had, in the past, fallen back on ethos--his sincerity, ethical posture, his identity as a dead-serious teacher, his face and voice radiating fairness, competence, good intentions, and so on. He admits that this "trust me, I have your best interests at heart" stance toward grades was an infantilizing attitude, a way to maintain his authority by giving a paternal, mysterious response to a reasonable, direct student query about what deserves an A, B, or less.
      
"Politically, unilateral authority benefits from infantilizing the students; if a traditional teacher talks down to them as if they are children, then that makes the teacher papa, the boss. This personalizes the power relations in a patriarchal way. Now, students hate to be patronized, commanded, or manipulated, which is what paternal vagueness about grading does, by mystifying the power relations of the classroom into unspoken subjective standards of judgment exercised by an elder whose authority must not be questioned." (p. 81)

Thus, he developed what he calls democratic teaching, a method that emerged from critical theory in education, in which teachers share power and authority with students. He differentiates between quantitative and qualitative assessments, a distinction I find immediately applicable to writing assessments today. Before reading his work, it had not occurred to me that quantitative assessments--which award points for numbers of words, pages, or exercises written--could be construed as discriminatory toward students in lower income brackets who work more than one job and thus have little time for schoolwork. Many of these students don't own computers and have to go to a computer center or library to do assignments. This is in contrast to wealthier students, who may receive support from parents, hold higher-paying jobs, and own not only a desktop but possibly also a laptop and other mobile devices, through which they can complete volumes of assignments more easily.
 
Shor contrasts his quantitative grading with his reasoning behind qualitative assessment methods, wherein he uses rubrics and "contract grading." In his qualitatively graded assignments, he employs activities such as open-book essay tests and take-home exams that ask students to think through substantive issues. I draw a parallel in today's classroom: take-home tests that require students to do in-depth online research, finding various sources of information that lead them to think through their answers substantively.

Shor designs rubrics for the assignments he grades qualitatively and gives the rubrics to students in advance to use as they develop their work. This gives students the opportunity to self-assess their work as they go, enabling them to work progressively toward improvement. I have seen this type of grading method used in graduate-level classes, but not at the undergraduate level.

Shor's ideas raise questions for me about today's classrooms: Would it be discriminatory, then, to use quantitative assessment methods, such as number of blog posts, number of pages written, word counts, attendance, absences, and other quantitative factors in grading? Shor draws from "The Case Against Standardized Achievement Tests," a 1989 article Terry Meier wrote for Rethinking Schools. Meier found that standardized test scores correlate with race and with family income, the tests being biased against students from working-class, poor, and non-white homes. I plan to find contemporary research on this that reflects 21st Century statistics so I can make comparisons.



As a note, Ira Shor is a Professor of Rhetoric/Composition at the City University of New York's Graduate Center and the College of Staten Island/CUNY (as of 2010). He gave a keynote address, "Can Critical Teaching Change the World?" at the Alternative Education Resource Organization on Feb. 20, 2010. Watch this video of his talk. In it, he raises these questions: "Can critical teachers indeed change the world for the better? Can classrooms inviting students to question the status quo, to consider inequality and injustice in society, to probe the ethics of power and the civics of knowledge–transform a cynical, conservative, test-tormented age into a new progressive era?"

His questions are relevant to any digital writing classroom where students become engaged in analyzing current issues and exercising critical thinking skills.  

Wednesday, March 16, 2011

Teaching and Assessing Multimodal Compositions--What's Style Got to Do With It?

As I explore the literature on assessing new media compositions, I’m connecting more dots between problems expressed by scholars and theories that address those problems. With an interest in what causes instructors to balk at teaching and assessing digital writing, I am struck by a statement made by Kathleen Yancey in her 2004 article, “Looking for sources of coherence in a fragmented world” (Computers and Composition, 21 (1), 89-102):

    “...we seem comfortable with intertextual composing [in which print and digital literacies overlap], even with the composed products. But we seem decidedly discomforted when it comes time to assess such processes and products.” 

My blog posts over recent days are relevant to Yancey's point about writing instructors' lack of comfort with assessments. One would think that along with new technologies would come new thinking and expectations for adapting one's ways to new applications.
But her point and my recent observations covered in earlier posts raise a question about the reason for instructors' discomfort. Perhaps the discomfort stems from another factor; perhaps it relates to style: instructors' choices of what and how to teach, students' choices of elements to include in compositions, and instructors' choices of how to interpret those elements.

What's the connection between multimodal composition, assessment, and style? I am discovering this while developing another project: a syllabus for a freshman composition class centered on the theme of "style."
I submitted my first draft of this syllabus in a recent graduate class, "Teaching Writing in High Schools and Colleges." It should get students interested in developing a writing style, just as a dancer chooses her dance steps or a musician chooses his performance style. Style is all about choices, and choices reflect who the individual is.

During my subsequent exploration of "style," I found the work of Winston Weathers in The Writing Teacher's Sourcebook (368), edited by Edward Corbett, Nancy Myers, and Gary Tate (2000); the piece was originally published in 1970 in College Composition and Communication. Weathers' article, "Teaching Style: A Possible Anatomy," discusses how a writer's choices of words, collections of words, and "larger units" of composition all designate the writer's style. I couldn't help but connect his concept to multimodal composition.

Weathers writes that "in the art of choosing what to write, one can and must choose from something. We need to explain that certain real materials exist in style--measurable, identifiable, describable...Real material that serves as the substantive foundation of style is of three general kinds: individual words; collections of words into phrases, sentences, paragraphs; and larger architectural units of composition."

In 1970, before computers or the idea of multimodal composition ever hit classrooms, Weathers noted these larger units of composition that appear to go beyond words. I translate this into our contemporary term, visual rhetoric, which can include video, audio, graphics, photos, and the like as larger architectural units of composition.

Weathers says, "What the teacher writes on the blackboard in front of the student, or even what the teacher writes outside of class and brings to read to his students, is the teacher's commitment to the style he is urging his students to learn. Perhaps some of the difficulties in teaching style arise because of teacher failure, not failure in sincerity or industry or knowledge, but failure in demonstrating an art and a skill. Teacher failure ever to write and perform as a master stylist creates an amazing credibility gap."

Master stylist? I suspect that few instructors who find themselves teaching multimodal composition in writing classrooms today would call themselves master stylists of this craft.

Weathers also says, "Many students write poorly and with deplorable styles simply because they do not care; their failures are less the result of incapacity than the lack of will." Applying his idea to instructors, it's understandable why an instructor might find it daunting to introduce a digital technology he or she has not yet mastered. But what we master, as well as what we avoid or ignore, is our choice. So I'm thinking that a teacher's style of teaching--traditional methods versus 21st Century ones--could stem from a lack of will to invest in shifting gears and learning something utterly new, when an instructor's role is already challenged enough by the daily routine. But at the end of the day, choices reflect the teacher's style.

Weathers says, “I think we should confirm for our students that style has something to do with better communication, adding as it does certain technicolor to otherwise black-and-white language. But going beyond this “better communication” approach, we should also say that style is the proof of a human being’s individuality; that style is a writer’s revelation of himself; that through style, attitudes and values are communicated; that indeed our manner is a part of our message...how we choose says something about who we are.”

Just as students who compose a rhetoric of words and other modes develop a style of composition through their choices, a teacher who hasn't done this herself is hardly able to grade or assess such work, and thus has a style somewhat incompatible with her students'.

I find Weathers’ approach to the matter of style to be quite insightful. It enables me to connect the concept of style with multimodal composition and see why an instructor might be uncomfortable assessing such work if she has not partaken in it herself.  What to do about this is part of my journey.

Assessing Multimodal Compositions: A Balancing Act for Graduate Student Instructors

My last two posts discuss using social media in writing classrooms and assessing that work. In my search for literature on this topic, I found a good example of how three graduate students at Bowling Green State University in Ohio are dealing with similar questions. These grad students are performing a balancing act as they apply old methods of assessment to new methods of writing. What they found, however, surprised me.
The attached article, "The New Work of Assessment: Evaluating Multimodal Compositions," shares their experiences of teaching in a writing program that requires them to incorporate visual rhetoric into their first-year composition classrooms. They say they are struggling with the task of applying their department-wide writing rubric to assignments that ask their students to create multimodal texts.
Elizabeth Murray, Hailey Sheets, and Nicole Williams comment on BGSU's updated rubric from 2005, which added a new assessment category of "format and design." They welcome the addition but still feel unsure about how to apply the rubric to their students' multimodal projects. These authors say that they lack the departmental authority to write their own rubrics or amend the current rubric as it stands. They decided to be proactive and tackle the issue by surveying other instructors' attitudes regarding assessment of multimodal compositions. Murray, Sheets, and Williams discovered that they were not the only ones who were challenged by the activity of assessing new media compositions and who needed further guidance on how to go about it.
The webtext they created shows how a traditional writing program rubric that is used to evaluate "alphabet-only" texts can, in fact, also be used to assess multimodal compositions, but requires modification to do so.  Their findings provide categories and resources for multimodal assessments.
Section One of their findings draws from digital composition theory and engages in the conversation surrounding multimodal theory and assessment. This text emphasizes that multimodal projects should be evaluated based on rhetorical principles.   Section Two provides their survey results from composition instructors that reflect their current assessment practices.
Section Three suggests how TAs and instructors can use their current rubric to assess multimodal compositions. Here they use Ball State's writing program rubric for their analysis. For example, this is how they used the Ball State rubric to assess a student's thesis/focus in "alphabetical text," and how they modified it to show how the same rubric would apply to a multimodal composition:
"Ball State Rubric for Thesis/Focus: Demonstrates an awareness of audience, is sophisticated, and is clearly established and  maintained throughout.
Multimodal Project: In a multimodal composition, an awareness of audience is demonstrated through a well-chosen selection of both words and images that best meet their needs and persuades the audience of their argument. The argument—or thesis—will not be presented in a single alphabetic sentence as it is in a traditional essay; instead, the thesis will be evident throughout the essay in the variety of modes that are chosen. Focus will be demonstrated by each mode consistently contributing to the overall argument or thesis of the composition."
Murray et al. didn't rewrite the rubric; they simply expanded the contexts in which to apply it (for example, the contexts in which a thesis could be presented) to include visual rhetoric expressed through modes other than alphabetic text.
Section Four provides examples of multimodal compositions from their students, which these TAs assessed using their traditional rubric. The examples include a digital film, a collage, a slideshow, and a flash animation. For instance, you can view the slideshow created by student #3 along with the instructor's assessment of it from a multimodal composition approach. The authors include traditional rhetorical terms and concepts in their reviews, showing that their evaluation of the work is based on rhetorical principles, but with language modified to suit the characteristics and nuances of the media used.
Their work inspires me in that it de-mystifies the task of assessing multimodal compositions. It's an example of how we can rely on proven traditional methods while updating them to address expanded applications in contemporary work. It also supports my view that traditional methods do need new language that is inclusive of new technological options.

I think instructors are prudent to heed Kathleen Yancey's warning against using "the frameworks and processes of one medium to assign value and to interpret work in a different medium" (from "Looking for Sources of Coherence in a Fragmented World," 2004). But in her article "Between Modes," Madeleine Sorapure, I believe, correctly interprets Yancey's statement as expressing concern about losing the chance to see new values emerge in the new medium, rather than as calling for a new assessment for each new technology.

I would add the concern that scholars in the field of composition studies may err in thinking that every digital technology needs its own form of assessment, as that could result in throwing out best practices of writing assessment in favor of newer, untested ones just to be "current."

Tuesday, March 15, 2011

Assessing Facebook-based Writing Assignments: A Need for New Pedagogical Content Knowledge?

In my March 14 post, I wrote about a dialogue among several graduate students in my Digital Technology in the Writing Classroom course about how to use blogging and Twitterfall in writing assignments. Another discussion focused on aspects of using Facebook-based writing assignments. These group members (one a middle-school teacher and two TAs of freshman composition) concurred that implementing such an assignment is daunting.

My interest lies in the assessment of a Facebook-based writing assignment. To even think about how to develop an assessment of such an assignment, though, I am snagged by the potential problems of implementing such an assignment, as shared by these teachers:

One felt it was important to harness students' interest in Facebook and other social media, but expressed concern about the drawbacks of such an approach. Daily in her classroom, she must constantly monitor what her students are looking at on the computers and whether or not they are on task. A Facebook-based assignment, she says, could "defeat the purpose," since her students are kids who don't self-manage. Perhaps this suggests that such an assignment is better suited to college students, who have the maturity to manage their time and stay on task.

Another student argued against social media-based writing assignments because, in her experience, classroom computers serve less to engage students and more to distract them. However, she did support the idea of using Facebook if there were a clear, purposeful way to go about it. One idea was having students do a rhetorical analysis of a Facebook page to gain an understanding of the rhetorical triangle within its content.

I found an article through onlineuniversities.com, "100 Inspiring Ways to Use Social Media in the Classroom," that lists exactly that. One of my group members reviewed the article after I recommended it and said she found some great ideas she could implement. However, among these innovative ideas, I see no information about assessing or grading any of the activities. Nor did my group member mention how, if she did implement any of them, they would fit into her overall planning for student development, or how she would grade the projects or assess her students' work. I didn't ask her about outcomes or assessments, but it's interesting that her consideration of the ideas did not automatically include a comment about them.

As I listen to pros and cons of implementing Facebook-based or other social media-based writing assignments, I notice that students aren't discussing the outcomes and grading of such assignments as much as they are examining the process. This suggests to me that the process can appear so daunting that the student instructor has trouble getting past the idea-implementation phase, before she can even think about how to assess the product and grade the assignment. I wonder: is it the newness of the process, the newness of the technologies (hardware and software), or another factor that creates this effect?

One response I found to my question comes from Punya Mishra and Matthew Koehler at Michigan State University in their article, “Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge.” Their article proposes a conceptual framework for educational technology that addresses the phenomenon of teachers integrating technology into their pedagogy:

          “Part of the problem, we argue, has been a tendency to only look at the technology and not how it is used. Merely introducing technology to the educational process is not enough. The question of what teachers need to know in order to appropriately incorporate technology into their teaching has received a great deal of attention recently...the primary focus should be on studying how the technology is used.” 

While Mishra and Koehler emphasize how teachers would learn to use technology, they state this in the context of teacher education that addresses pedagogical content knowledge (PCK). “...Pedagogical content knowledge is of special interest because it identifies the distinctive bodies of knowledge for teaching. It represents the blending of content and pedagogy into an understanding of how particular topics, problems, or issues are organized, represented, and adapted to the diverse interests and abilities of learners, and presented for instruction.”

Mishra and Koehler discuss reasons why teachers don't embrace new technologies, including fear of change and a lack of time and support to learn the tools, techniques, and skills that go along with them. Without this training, I wonder, how can a teacher even conceptualize a method of assessing and grading such a new beast?

In my review of the literature on this topic, I find myself favoring the idea that a teacher's knowledge of technology is more than an important aspect of overall teacher knowledge--it is an essential one, ranking equal to the domains of content knowledge and pedagogical knowledge. Without it, teachers may feel incapable of grading or assessing an activity with which they are not familiar, like having to grade a musical composition or a biological finding when one does not teach music or biology.

If a teacher doesn't understand the technology (I don't mean computers here, but social media software and networking functions), doesn't this amount to a lack of pedagogical knowledge about the processes, practices, and methods of teaching and learning (in this case, using social media) and how they serve overall educational purposes, values, and aims? In the absence of such pedagogy, a teacher cannot conceptualize how to grade or assess writing projects like those mentioned here, because the projects lack context within the overall writing scheme of the class.

Mishra and Koehler comment, “A teacher with deep pedagogical knowledge understands how students construct knowledge, acquire skills, and develop habits of mind and positive dispositions toward learning. As such, pedagogical knowledge requires an understanding of cognitive, social, and developmental theories of learning and how they apply to students in their classroom.”

In the case of my group members (myself included), it seems we would benefit from stronger pedagogical content knowledge applicable to the teaching of specific content, specifically the use of various digital technologies (software), such as Facebook and other social media, in the writing classroom. This knowledge should include knowing which approaches to teaching fit the content and, likewise, how elements of the content can be arranged for better teaching.

Monday, March 14, 2011

Assessing Writing Assignments Using Social Media--How Now Do We Do It?

In my Digital Technology in the Writing Classroom graduate course, my group explored ideas (through a discussion thread) about how to integrate social media into in-class or online writing assignments. One student, Caitlin Martinez, proposed using blogging like freewriting: she would have students write blog posts at the beginning of class in response to reading assignments, with the goal of getting them to just think and write. Another student, David Caudill, suggested using Twitterfall in the writing classroom. His students would explore current events in the news via Twitterfall, then post tweets as freewriting based on the discussion. (As an alternative to tweeting, students could post responses on a collaborative writing page.) In either case, the class would review the writings together.

Another student suggested that she views computers as more of a hindrance to learning when they are not used for clear-cut purposes, functioning instead as entertainment and a threat to achieving learning outcomes. Taking that teacher's opinion into account, I wondered whether my other classmates' ideas are merely a way of using computers in the classroom with no purpose. I think not. Both are creative and productive ideas, as they contribute to the invention phase of the writing process and thus serve a clear-cut purpose. Both Martinez's and Caudill's ideas would engage students in the assignment by giving them a way to use social media during class time (to build skills in personal expression), but with the instructor's intention of using it as a tool for freewriting.

As a writing instructor, my first thought goes to how to assess these types of writing activities. I asked Martinez whether she would make this a graded activity: would students get points for participation, or what kind of incentive would we give them to motivate them to write, and to write well? She said she would assign a participation grade for the blogging rather than grade the freewriting according to quality. Her goal for the activity would be helping students express themselves and put their thoughts on paper/screen. Caudill's goal was also focused on the purpose of freewriting.

Both Martinez's and Caudill's ideas remind me of the work of expressivist composition scholar Peter Elbow. In "Some Thoughts on Expressive Discourse: A Review Essay" (1991), which I accessed in The Norton Book of Composition Studies (p. 935), he wrote, "The rhetorical expressionists viewed writing not as a rhetorical act or a practical means of communication but as a way of helping students become emotionally and psychologically healthier and happier, more fulfilled and self-actualized." Martinez's, Caudill's, and Elbow's goals, then, are less about the product and more about the student's personal state following the process.

How do I add a quantitative assessment to this? Does an instructor need to? I think the answer is: I don't. I don't need to devise a different assessment method for this form of writing just because the students are using blogging or Twitterfall to freewrite. I turned to The St. Martin's Guide to Teaching Writing and reviewed the chapter "Teaching Invention" (151-173), which quotes extensively from Elbow's Writing Without Teachers and emphasizes the importance of not grading freewriting/invention exercises:


    "The requirement that the student never stop writing is matched by an equally powerful mandate to the teacher: never grade or evaluate freewriting exercises in any way. You can collect and read them—they are often fascinating illustrations of the working of the mind—but they must not be judged. To judge or grade freewriting would obviate the purpose of the exercise; this writing is free, not to be held accountable in the same way as other, more structured kinds of writing. Be sure to tell students that you will not be grading their freewriting. The value of freewriting lies in its capacity to release students from the often self-imposed halter of societal expectations. If you grade or judge such creations, you will convey the message that this writing is not free."

I think it's best to save assessments for other types of writing assignments, where a grade is essential to help students evaluate how they are progressing. This frees me to use social media and other digital technologies in the invention phase of the writing process without having to devise an assessment method for them. I do believe students are motivated by grades to complete assignments (otherwise they won't want to do them), so I'll allocate a participation point system just for doing the post. But I won't judge writing done in a freewriting context, whether it is done through blogging, tweeting, a keyboard and word processor, or pen and paper.

Thinking this through by reviewing others’ ideas enabled me to see that we don’t need to create a new assessment method each time we use a new technology in writing; we need to assess according to purpose and student outcomes. 

Monday, February 21, 2011

Assessments of Digital Writing: How Do We Identify and Value Competencies?


In pre-digital writing classrooms, instructors sought student outcomes focused on developing language arts literacy. But in digital writing classrooms, those outcomes have expanded to developing multiple literacies: textual, visual, aural, participatory (social networking), software and hardware, and so on. So, as we develop assessment tools for multimodal composition classrooms, where should instructors place their focus if they don't want to abandon their emphasis on traditional core writing instruction?

As one example, the National Writing Project's Because Digital Writing Matters describes the Michigan Department of Education's "Content Expectations" (2006), which focus on the tools and techniques of digital writing in the state's high schools and can be transferred to college students as well (pp. 102-103). These standards of assessment layer digital writing onto other valued competencies in literacy:
  1. Blogs, web pages - the student must write, speak, and create artistic representations to express personal experience and perspective;
  2. Multimedia presentations - use the formal, stylistic, content, and mechanical conventions of a variety of genres in speaking, writing, and multimodal presentations;
  3. Management of print and electronic resources - develop a system for gathering, organizing, paraphrasing, and summarizing information; select, evaluate, synthesize, and use multiple primary and secondary resources;
  4. Use of technological tools - Use word processing, presentation, and multimedia software to provide polished written and multimedia work; 
  5. Make supported inferences and draw conclusions based on informational print and multimedia features; use various visual and special effects to explain how authors and speakers use them to infer the organization of text and enhance understanding, convey meaning, and inspire or mislead audiences;
  6. Examine the ways in which prior knowledge and personal experience affect the understanding of written, spoken, or multimedia text; 
  7. Understand the commercial and political purposes of producers and publishers - learn how they influence not only the nature of advertisements and the selection of media content, but the slant of news articles in newspapers, magazines, and the visual media.
    (Note: All bold text was the emphasis of the original author, not mine).
I wish that my writing throughout high school and early college had been subject to the above standards of assessment. I would have become a more critical analyzer of information rather than a student who wrote what I thought my professors wanted to hear in the format they wanted to see (a traditional academic essay).

Not that that format lacked value; I needed to learn it as a foundation of writing. But it wasn't the end-all, be-all it was treated as. I needed to acquire other literacies as well, as evidenced by the fact that, as a graduate of the MAPW program, I have discovered gaps in my own digital literacies that I was not even aware of until taking the Digital Technology in the Writing Classroom course. Thus, I support assessment methods and standards that push students into the unfamiliar and that produce measurable development in a variety of digital literacies.

Eve Bearne wrote of standards of assessment applied to the development of student writers as "multimodal textmakers." I’d like to know more about how to describe the development of students from: "a multimodal text maker in the early stages, to becoming an increasingly assured multimodal text maker, then becoming a more experienced and often independent multimodal text maker”  (Eve Bearne, 2009, p. 105). 
I’m interested in how instructors have implemented these or other similar standards in assessing students’ writing, and any rubrics used that cover these dimensions. I find it somewhat daunting to create such a rubric, so I’d love to see more examples of workable assessment tools. 

Wednesday, February 16, 2011

Principles of good practice in digital writing classrooms--what are they?

I'm working with a team of graduate students at Kennesaw State University (Spring 2011) to develop a position statement on digital technology in the writing classroom. My group is establishing principles of good practice for a digital writing/new media classroom, as well as faculty responsibilities that support those good practices. 

Below are six principles I wrote as part of our collaborative statement.  I welcome your feedback on these principles. 
(I cite page numbers from the National Writing Project's Because Digital Writing Matters, with Danielle DeVoss, Elyse Eidman-Aadahl, and Troy Hicks [2010, Jossey-Bass]):
 
          First, as teachers of digital writing, we must engage in an ongoing review and refinement of current practices, and invent new ones for digital literacy. In doing so, we need to ensure that the principles of good practice governing these new activities are clearly articulated. Thus, we make the following assumptions about writing courses that engage students in developing digital literacies. Features of these courses should:

   

         a. provide students with a clear articulation that being a digital writer involves information literacy, including the development of ongoing skills in research, file saving, file storage, file transfer, composing, revising, and editing, and the ability to manage, analyze, and synthesize multiple streams of simultaneously presented information in participatory environments (pp. 13, 97); 

   

         b. address the rhetorical complications and implications of paper-based and digitally-mediated texts to enhance the critical dimensions of students’ thinking and writing (p. 14, 59);

   

          c. focus on the participatory culture that digital literacies sustain; and thus build skills including “play, performance, simulation, appropriation, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, negotiation, and visualization” (p. 11, 13). 
    
   

          d. develop proficiency with the tools of technology, but transcend specific technologies so students can change and evolve with technology rather than remain rooted in skills anchored to one particular tool (p. 40);

   

          e. equip students to be more than passive information consumers (pp. 13, 31), and take into account literacy barriers, language barriers, and cultural diversity, since the Internet tends to be geared toward English-speaking, middle-class learning methods (p. 31) that divide the "haves" and "have-nots" (pp. 12, 31); 

   

          f. lead to specific student outcomes, as represented in an e-portfolio, that minimally demonstrate progressive development across Yancey's four-part framework: self-knowledge, content knowledge, task knowledge, and judgment (pp. 109-110).

The full position statement written collaboratively by the graduate students of my class may include an introduction written by our professor, Dr. Laura McGrath.  As a class, we are considering submitting this article for publication. I will post a message on my blog with a link to the published document in the event that we get it published.