Wednesday, June 13, 2007

Essential Articles

Here are my summaries of some essential articles for our research.
deb
Callow, J. (2003, April). Talking about visual texts with students. Reading Online, 6(8), Available: http://www.readingonline.org/articles/art_index.asp?HREF=callow/index.html
Using the multiliteracies visual design concepts of Kress and van Leeuwen (1996), Callow investigated what metalanguage students used when talking about the visual aspects of their multimedia texts.
Two Australian teachers, each working with 25 sixth-grade (11-year-old) students, participated in the study. The science and English curricula were combined with the school’s computer technology program to create a six-week unit of study. In the study, students investigated food production and worked together to create PowerPoint slide presentations that integrated text and image. Students worked in groups of four or five, and the researchers provided each group with several facts. Students combined the facts; paraphrased; sequenced the information; and included text, images, sound, and animation as part of their multimedia presentations.
Researchers used a qualitative approach (Merriam, 1998). Sources of data included field notes, discussions with teachers, collection of work samples, and group interviews with students about their work. Discussions with students included their comments on the features of image, color, selection, salience, and layout. In addition, the researchers asked the students evaluative questions about the effectiveness of their use of the visual features in their presentations. Student perceptions of what qualities made a good slide show—including features of color, selection, image, salience, and layout—were the main criteria for evaluating presentations.
When asked what makes an effective PowerPoint, the students intuitively noted: color (15 students), animation (10 students), sounds (8 students), text features (7 students), backgrounds (6 students), and pictures (5 students). However, when asked why they chose a particular element, few students were able to express specific reasons for their choice. Interestingly, students decided that photographs and clipart would be effective in different circumstances. They noted that photographs were “more realistic” and denoted a serious tone, making them more effective for adults. Clipart, on the other hand, would be an effective visual for younger children or a less serious tone.
In terms of metalanguage, students discussed many features of design by comparing their work to books or other visually enhanced texts. Although the students were unable to discuss the elements in terms of a specific metalanguage, they were able to justify why they made particular choices.
The strength of this article is that it investigates an issue essential to students competing in a technological global economy: the creation of effective presentations. Although written texts remain an important means of communication, final presentations in business settings increasingly include multimodal “texts.”
I also found weaknesses in this article. First, it would have been helpful to see more examples of student work or vignettes that detailed a presentation. In addition, the researcher characterized PowerPoint as linear rather than weblike. However, using PowerPoint to create a museum kiosk-like presentation, students can easily add hyperlinks with buttons within the show, between shows, and to online documents. Perhaps the version of PowerPoint available at the time of the study did not include these features. On the other hand, few people are familiar with the interactive features of PowerPoint, including action buttons and custom animation.
I found the verbal reports of effectiveness compelling—so compelling that I plan to use this article as a major element in my dissertation. Another strength was that the study was easy to follow and included detailed descriptions of the presentations. However, I would have liked to see more details about the actual process of creating the PowerPoint presentations.
One implication of this article is that when working with visual and multimodal texts, students need not only technical skills but also an understanding of how these elements create meaning. In particular, they must understand how the features of color, salience, images, and layout design affect the effectiveness of a presentation. Educators need to understand the use and meaning-creation potential of these features.
Integration of multiliteracies in new learning environments is a new and exciting concept, one I intend to study in detail over the next couple of years. With the advent of social networking sites and video sharing (YouTube), anyone can publish a multimedia message. No longer are elements of design strictly in the hands of professionals; amateurs can use simple design tools to create their own messages. Schools must keep in touch with the realities of literacy. What types of literacies are effective now, and what types will be effective in the future?




Article 2
Semali, L., & Fueyo, J. (2001, December/January). Transmediation as a metaphor for new literacies in multimedia classrooms. Reading Online, 5(5). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=semali2/index.html

This research used a case study approach to investigate transmediation through exemplars observed in classroom situations.
In the article, the authors first defined key terms:
• Multiple sign systems: art, movement, sculpture, dance, music, words, digital, and multimedia.
• Transmediation: responding to cultural texts in a range of multiple sign systems.
• New literacies: “the ability to read, analyze, interpret, evaluate, and produce communication in a variety of textual environments and multiple sign systems” (p. 1).
Then, following a well-developed literature review, the authors discussed their central concerns:
• “What is the relationship between what students know and the signs they encounter in their classrooms (about race, class, gender, disability, and sexual orientation)?
• What meaning do they make of these semiotic systems in their literacy practices?” (p. 3).
The authors provided some detailed cases, which illustrated exemplars of transmediation activities. They then discussed the first scenario in terms of semiotics. However, when I turned from page five to page six, I thought a major part of the article was missing. The authors simply noted, “equally, the other scenarios aim to open our eyes to a variety of symbolisms, codes, and conventions…” They failed to analyze the other scenarios in terms of sign systems, transmediation, and new literacies. What began as a very exciting article fell short of satisfying my desire to learn more about how the real-life cases related to background theory.
Despite the weaknesses in the analysis, I found the format clear and easy to follow. I plan to use a similar format to write up results for a qualitative study on multimedia creations.
Article 5
Muffoletto, R. (2001, March). An inquiry into the nature of Uncle Joe’s representation and meaning. Reading Online, 4(8). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/muffoletto/index.html
In this article, Muffoletto addressed critical or reflective visual literacy. In terms of visual literacy, Muffoletto noted that a diversity of meanings has traditionally been devalued in classroom settings. Reflective visual literacy empowers students to understand the power of the image and to evaluate images based on their personal experiences. Comprehending the process of reflective visual literacy is only possible if teachers incorporate the notion of multiple perspectives into their daily teaching. Using photo essays, students should be allowed to express their own voices and describe their own perceptions of how the image reflected their experience. The ultimate power of reflective visual literacy is that it situates visual representations and their interpretation (construction of meaning) in a context that raises issues about benefit and power.
Muffoletto provided an extensive discussion of how individuals perceive image as text. Images—and our perceptions thereof—are not natural. We see what our eyes and brains let us see. We experience the world through a reality that has been constructed for us through social and biological limitations. “Like texts, visual representations (visual texts) are the result of ideologically formed intentional acts…the visual text, as a representation that stands in place of an object or concept, requires a social codification—the construction of meaning through a system of codes used by the author and reconstructed by the reader.”
Muffoletto discussed the “fluid representational nature of icons, signs, and symbols” he found in photographs. At one moment the picture is an icon (this is a picture of…), a sign (usually I associate this with…), and a symbol (more complex associations). Meanings are assigned to the image by individuals who are members of historical social communities (Fish, 1980)—including gender, race, religious, cultural, and economic perspectives.
Muffoletto further grounded the concept of visual literacy within semiotics, the study of signs, which could be a useful tool for understanding the social and historical construction of meaning. Semiotics positions representation from three perspectives: icons, signs, and symbols. An icon, he noted, is a representation with a strong perceptual relationship to the object for which it stands (Barthes, 1964). Signs are conventions—“agreed upon abstractions that we associate with some thing or concept.” Letters, colors, shapes, or images on a screen mean nothing by themselves; we need to organize and assign meaning to them. Symbols (Langer, 1976) are instruments of thought; they work differently from icons and signs because rather than corresponding directly to objects or concepts, they operate through conceptual frameworks. For example, a star, or an image of a star, may refer to a religion, but it also symbolizes all that the particular religion stands for (Wollen, 1969).
Reading implies an intention to construct meaning. From a modernist perspective, the meaning of a text lies within the text itself, placed there by the author; the role of the reader is to find the truth. From a postmodern perspective, meaning is the result of interaction between reader and text and is constructed by both author and reader. Muffoletto stated that constructing meaning can be seen through two different lenses: politics and pedagogy. Traditionally, teachers have been responsible for giving the “official” truth or meaning of texts, and standardized tests emphasize this; diversity of meaning is devalued. These practices are a result of seeing through only one lens. Reflective and critical analysis practices allow a democratic reconstruction of images.
The principles of critical visual literacy are essential in the increasingly visual world children face in their out-of-school literacy contexts. Muffoletto noted that the foundations of reflective visual literacy require that students value the differences of understanding and expression involved in the construction and deconstruction of all texts as social products. Furthermore, as technology changes, our understanding of “reality” changes. Muffoletto stated that educators must consider new literacies in terms of power relationships and how meaning is constructed.
Article 8
Messaris, P. (2001). New literacies in action: Visual education. Reading Online, 4(7). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/action/messaris/index.html
Messaris, a leading researcher and theorist in the area of visual literacy, argued for a deepening of visual literacy education beyond the critical analysis of visual texts. He noted that the process of creating visual images contributes more than analysis alone to students’ understanding of the multiplicity of visual information to which they are exposed in a multimedia-saturated world. However, despite this exposure to media, Messaris asked, are students indeed “media savvy”? Furthermore, he noted that one cannot assume that the consumption of visual images leads to an improvement in a student’s creative abilities.
Messaris then described the theoretical implications of the connection between visual creativity and greater cognition as defined by “spatial intelligence” (Gardner, 1983). Spatial intelligence, he noted, is the “process of forming mental representations of three-dimensional reality as a basis for understanding one’s environment and interacting with it effectively. It is a type of intelligence crucial for success in professions such as architecture or carpentry, but it is also a vital ingredient of any person’s everyday physical activities.” Messaris provided examples of how a film editor uses multiple devices for constructing meaning, including zooms, pans, transitions, focus, spatial layout, and angle.
Finally, Messaris discussed the implications of visual literacy for education. He noted that students must learn to create visual meaning, not just consume it. Visual connections come easily to experienced viewers; however, the ability to create multimedia compositions that combine images does not. It is a form of knowledge of a visual grammar that comes through active learning. Through the act of communicating through images, students move beyond seeing media as a “window on reality” to a more enlightened state in which they are able to construct new realities through the manipulation of visual conventions. The higher-order spatial and analogical thinking skills used in film editing, Messaris argued, carry over to other realms of experience; therefore, learning these skills should “be considered the core objective of an actively oriented visual curriculum” (p. 8).
Lemke, J. (2006). Toward critical multimedia literacy: Technology, research, and politics. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 3-14).
In school, students are taught to carefully analyze and deconstruct text; however, the accompanying visual images are most often ignored. Although multimedia texts outweigh monomodal (writing-only) texts, school curricula tend to ignore visual literacy. With the rise of the World Wide Web, “reading” images has become an even more essential skill in decoding multimodal texts, such as web pages.
Lemke argued that we need a “broader definition of literacy itself, one that includes all literate practices, regardless of medium” (p. 4). Texts, he stated, are converging; television programs have websites and so do popular books, movies, and video games. For example, Harry Potter, which began as a print media phenomenon, has moved to websites, movies, television commentaries, and even video games. Similar content, images, and textual themes are distributed over a variety of media.
In light of the complexity of today’s literate activities, Lemke discussed the necessity of conceptual frameworks to help us “cope with the complexity and the novelty of these new multimedia constellations” (p. 5). The field of social semiotics (also known as critical discourse studies, critical media studies, and critical cultural studies) has been developing key concepts. The core idea of semiotics is that all human meaning-making shares a number of features. In multimedia semiotics, these common features form the basis on which integration across media is possible. The fundamental unit is the meaning-making practice, whether that practice is used to create or to analyze such texts. The model of meaning-making applies across disciplines and across semiotic systems. In fact, we can never make meaning with one mode alone, Lemke argued:
If you speak it, your voice also makes nonlinguistic meaning by its timbre and tone, identifying you as the speaker, telling something about your physical and emotional state, and much else. If you write it, your orthography presents linguistic meaning separately from additional visual meanings (whether in your handwriting or choice of font) (p. 5).
All communication, Lemke noted, is multimodal communication. He further defined multimodality as “the combination or integration of various sign systems or semiotic resource systems, such as language, gesture, mathematics, music, etc.” (p. 5). The resulting product is a form of gestalt: the whole is not only greater than the sum of its parts, but the way meaning is represented across a variety of modes matters more than any single mode.
For example, when we interact with a website, we make trajectories across links that carry us to a wide variety of different genres and media. We not only surf within sites, we surf between sites, often discovering video, audio, and interactive media that accompany more traditional words and static images. Lemke noted that we are learning “to make meaning along these traversals” that are “relatively free of the constraints of conventional genres.” Additionally, the intertexts create meanings of their own. As Lemke noted, “as our culture increasingly enmeshes us in constellations of textual, visual, and other themes that are designed to be distributed across multiple media and activities…these cross-activity and cross-medium connections tend to become coherently structured” (p. 7).
In terms of these multimedia texts, Lemke raised key questions that must be answered as we prepare to teach critical multimedia literacy. In most multimodal presentations, different modes are used to represent meaning. One cannot simply deconstruct the verbal message and obtain the whole meaning; likewise, one cannot simply decode the visual design elements. These modes must function together. Techniques of multimodal analysis must “show how text and images are selectively designed to reinforce one another” (p. 8). No single meaning can be projected across a single modality. The sign from each medium can portray a different message; some of these may create perverse or divergent meanings.
As educators, we must teach students to become specialists in critical multimedia literacy in order for them to make free and democratic choices. To be critical, Lemke noted, is not “just to be skeptical or to identify the workings of covert interests…it is also to open up alternatives, to provide the analytical basis for the creation of new kinds of meanings” (p. 13). A true critical discourse helps students not only to critique, but to create, author, and produce multimedia texts.
My Commentary: In the new world of Web 2.0—a world where end-users, consumers, teachers, and students are creating content for themselves and their peers—self-generated online texts can be created as Word documents, audio files, or videos. Free video creation software, such as Windows Movie Maker, allows any individual to create and upload a fully edited, semi-professional movie to the Internet from the comfort of their own home. YouTube is a prime example of this type of opportunity. Amateurs are able to create multimedia and showcase it to an audience, a process that at times takes no more than an hour. So how will these new multimedia literacies be defined?

Article 10
Hobbs, R. (2006). Multiple visions of multimedia literacy: Emerging areas of synthesis. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 15-28).
Hobbs first discussed the impact of “screen activity” on American children and teens, who spend “an average of eight hours per day using media, including television, videogames, the Internet, newspapers, magazines, films, radio, recorded music, and books” (p. 15, citing the Kaiser Family Foundation, 2001). In a world constantly bombarded with new and changing literacies, children need to be able to find and critique media messages. Educators traditionally rely on textual and language competencies. However, Hobbs noted that it is also essential for students to learn to use symbol systems (images, music, sound, motion) as a means of expression and communication. Literacy educators are beginning to recognize that they need to teach students how to read and respond to the array of media technologies in order to prepare them for the 21st century.
Educators no longer own the concept of literacy; academic scholars from a wide variety of disciplines (media studies, psychology, cultural anthropology, communications, history, library and information science, literary theory, linguistics, rhetoric, etc.) have become increasingly interested in how individuals make meaning in reading and composing multimedia texts. As a result, educators are using new literacies terminology such as visual literacy, media literacy, critical literacy, information literacy, and technology literacy.
Visual literacy, a field based on nearly 100 years of work by interdisciplinary scholars, has long discussed the importance of visual materials and concepts (such as selection, framing, composition, sequence, and aesthetic dimensions of images) in the classroom. Scholars interested in visual literacy have examined how images are interpreted and understood, how images and text interact in meaning-making, how exposure to visual images affects cognitive development, and how semiotic dimensions can be examined. Learning about the visual conventions of images helps give “readers” a way to analyze texts and “creators” some strategies to enhance their own productions. Texts are only representations of reality and key visual grammars exist cross-culturally in the creation of these texts.
Information literacy has been defined by the American Library Association as the set of abilities an individual needs to recognize when information is needed and how to locate, evaluate, and use it. In many instances, however, when information literacy is actually taught in schools, it is defined as a narrow checklist of specific skills rather than as a more critical analysis involving a multiplicity of comprehension techniques.
Media literacy educators in the United States have been influenced by the work of British, Canadian, and Australian scholars who have discussed engaging educational practices that teach children to analyze mass media and popular culture. Good media literacy pedagogy stresses the process of inquiry and situated active learning based on the work of Freire and Macedo.
Critical literacy arose from traditions in semiotics and cultural studies. Meaning making in a critical literacy arena involves the social, historical, and political contexts combined with the author’s meaning. Critical, as used by scholars, refers to the recognition of oppression and exploitation embedded in texts. Critical literacy scholars “explore reading within a sociocultural context [and] examine and understand how various texts, including pictures, icons, and electronic messages (as forms of symbolic expression) are used to influence, persuade, and control people” (p. 19).
After defining the new literacies, Hobbs discussed a model for integrating the conceptual tenets of multimedia literacies. School practices must change to incorporate themes of authors and audiences, meanings and messages, and representations and reality. Although some evidence is emerging from research on multimedia literacies, Hobbs noted that most examinations have looked at a small number of students in a single classroom (Alvermann, Moon, & Hagood, 2001; Anderson, 1983). Some have explored whether students learn the appropriate facts through multimedia (Baron, 1985; Kelly, Bunter & Kelly, 1985) or whether a video broadcast affects cognitive or critical literacy skills (Vooijs & Van de Voort, 1993). Recently, some case studies have documented educators’ practices in classrooms (Hart & Suss, 2004; Hart, 1998; Hurrell, 2001; Kist, 2000—note to self, also include Leu book here).
Hobbs noted that further research should continue to explore how and why multiliteracies are incorporated into classroom practices. Furthermore, she noted that educators must be responsive to Masterman (1985), who identified a central outcome for media education: the ability to apply skills and strategies learned in the classroom to everyday life. Such work depends on teachers who have the initiative, creativity, imagination, and perseverance to enable students “to develop the competencies they need to be citizens of an information age” (p. 25).

Article 11
Royce, T.D. (2007). Intersemiotic complementarity: A framework for multimodal discourse analysis. In T. D. Royce & W. L. Bowcher (Eds.). New directions in the analysis of multimodal discourse. (pp. 63-109).
Royce noted that the theoretical foundation for multimodal discourse analysis derives from the Systemic Functional Linguistics (SFL) view of language as a ‘social semiotic’ (Halliday, 1978). Halliday made four central claims about language: it is functional, in terms of what it can do or what can be done with it; semantic, in that it is used to make meaning; contextual, in that meanings are affected by social and cultural situations; and semiotic, in that it is a process of selecting from “the total set of options that constitute what can be meant” (Halliday, 1978, 1985, p. 53). Halliday also identified three types of meanings, or “metafunctions,” that operate simultaneously in the semantics of every language: the ideational metafunction (responsible for “the representation of experience”); the interpersonal metafunction (meaning as a form of action); and the textual metafunction (maintaining relevance to the context). Reading or viewing involves the simultaneous interplay of three elements, which correlate with the metafunctions: represented participants (elements actually present in the visual), interactive participants (participants interacting with each other in the act of reading—the graphic designer and the reader), and the visual’s coherent structural elements (compositional features such as elements of design or layout).
Royce provided a detailed analysis, based on the intertextuality of these factors, on pages 68 and 69. The interpretation deals with how visual and verbal modes interact “intersemiotically” with the identification of participants, represented processes or activities, the circumstances, and the attributes. Each of these aspects can be discussed in terms of Visual Message Elements. Royce further explained that, in the same way that metafunction concepts can be applied to visual modes of communication, Halliday and Hasan’s (1985) analysis of cohesion in text can be used to “explicate the ideational cohesive relations between the modes in a multimodal text.” For this purpose, Royce used the following sense relations: Repetition (R) for the repetition of experiential meaning; Synonymy (S) for a similar meaning; Antonymy (A) for an opposite meaning; Hyponymy (H) for a general class of something and its subclasses; Meronymy (M) for reference to the whole of something and its parts; and Collocation (C) for words that tend to co-occur within a given subject area (Halliday, 1985).
Furthermore, the examination of the intersemiotic interpersonal features of a multimodal text looks at the relations between the visual and the viewer and how they are represented (Kress & van Leeuwen, 1996). This can be very important in terms of the speech functions distinguished by Halliday (1985): offer, command, statement, and question. Visual type can also be important, as can the viewer’s level of involvement, which is realized by visual angle or point of view. Power relations between viewers and the represented participants are encoded in the angle between them. The degree of social distance is realized by the size of the frame: close-up, medium, and long shot. These different kinds of shots parallel the distances people use when they talk face to face (Kress & van Leeuwen, 1990). Relationships can occur when the interpersonal meanings in the visual and verbal modes co-occur on the same page, related through reinforcement of address and through intersemiotic attitudinal congruence and attitudinal dissonance (modality) relations. Furthermore, relationships can also occur when compositional meanings are integrated by design features such as information value, salience, visual framing, visual synonymy, and potential reading paths.

1 comment:

Jim said...

I am responding to the several posts above in an order somewhat like they were posted. It is my response to Debbie's responses, with the intent of marking things that we can use:

Collier: seems similar to Rose regarding the good collection. The elements that C writes about were also mentioned in R's compositional analysis. C. also lists a method of steps.

Bell's methods look excruciatingly correct. We might need this.

Callow: for the study with 5th and 6th graders that he describes, I can see us doing more extensive interviewing of the film producers (as auteurs) in the fall. Would that kind of follow-up be allowed? Would it even be possible? If so, we could video the interview and then splice in their commentary, or have an audio voice-over, or a PIP.
Maybe the format should be a combination of semi-structured interview and stimulated recall, with specially selected video clips as the stimuli.

Kress and van L: provide a vocab for discussing and analyzing the image. Is this to be our (D)iscourse?

Messaris: Just because they know how to do it does not mean that they know what they are doing. We need to be explicit, notice when there is evidence of the kids using a media strategy, point it out, say it is good, and label it. Messaris argues from a consumption-of-images (reading) perspective in order to make a case for the production of media products. It is a point that we have already realized in our work. So we could push Messaris's argument a bit further and claim that even in the production of media, the embedded strategies must be brought to light explicitly and labeled so that the likelihood that they are used again is increased. I think that this is "teaching."

Lemke: is making a case for multimodal discourses by arguing (against our common understanding) that even reading and writing are semiotically realized as multimodal. Do we need to make this same argument? I think that this type of persuasion on these issues is dated at this point. What do you think? Can we simply (?) assume this much and get on with it? Like "Lemke makes an effective argument for the legitimacy of multimodal discourse..."

Royce: the analytic framework here, both terminology and practice, may help us out. It occurs to me that it might be a good exercise to make a grid chart like we did for the crit lit study to compare the differing methods in the various approaches. I'm thinking of a retrieval chart with the different studies running down the right side. It would have columns headed by such things as: data, unit of analysis, theoretical framework, purpose, etc.