Wednesday, June 13, 2007

Two more chapter summaries from Handbook of Visual Analysis

Collier takes a more qualitative approach; the next chapter takes a more reductionist one.
Collier, M. (2001). Approaches to analysis in visual anthropology. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis. London: Sage.
"analysis of visual records of human experience is a search for pattern and meaning, complicated and enriched by our inescapable role as participants in that experience" (p. 35).
The importance of all of the elements of an image: the visual field contains a complex range of phenomena. Responsibly address many aspects of images, "recognizing that the search for meaning and significance does not end in singular 'facts' or 'truths' but rather produces one or more viewpoints on human circumstances, and that while 'reality' may be elusive, 'error' is readily achieved" (p. 36).
Analysis and the importance of contextual information. Making good research collections: good documentary photos are different from "good quality" photography. A good documentary photo is often presented as a single image divorced from the larger context (my note: with digital photography, you can do both…take the wider photo and zoom in on the particulars).
A good research collection is carefully made, with comprehensive temporal, spatial, and other contextual recording; good annotation; collection of associated information; and maintenance of this information in an organized data file.
DIRECT ANALYSIS:
"Any major analysis should begin and end with open-ended processes, with more structured investigation taking place during the mid-section of this circular journey" (p. 39).
The model, adapted from Collier and Collier (1986) outlines a structure for working with images.
1. First stage: observe the data as a whole. Look at and listen to overtones and subtleties to discover connecting and contrasting patterns. Trust feelings and impressions. Take notes and identify the images they respond to. Write down all the questions the images trigger in your mind…these may be good for future analysis. See and respond to the photos as a statement of cultural drama. Let these characterizations form a structure within which to place the remainder of your research.
2. Second stage: make an inventory or log of all your images. Design the inventory around categories that reflect and assist the research goals (a hypothetical sketch of such an inventory appears after this list).
3. Third stage: structured analysis. Quantitative: go through the evidence with specific questions; measure distances, count, compare. The statistical information can be plotted on graphs, listed in tables, or entered into a computer for statistical analysis. Qualitative: produce detailed descriptions.
4. Fourth stage: search for meaning and significance by returning to the complete visual record. Respond again to the data in an open manner. Re-establish context, lay out the photos, view the images in their entirety, and then write your conclusions as influenced by this final exposure to the whole.
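Collier does not prescribe a data format for the stage-two inventory, so purely as a hypothetical illustration, an image log organized around research-driven categories might look like this minimal Python sketch (the field names and sample entry are my own invention, not Collier's):

```python
# Hypothetical sketch of a stage-two image inventory entry; the categories
# (location, participants, activity, etc.) are invented examples and should
# be designed around the actual research goals.
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    file_name: str          # e.g. "IMG_0412.jpg"
    date: str               # when the photograph was taken
    location: str           # spatial context
    participants: list      # who appears in the image
    activity: str           # what is happening
    annotations: str = ""   # contextual field notes
    open_questions: list = field(default_factory=list)  # questions the image triggers (stage 1)

inventory = [
    ImageRecord(
        file_name="IMG_0412.jpg",
        date="2007-06-01",
        location="classroom",
        participants=["student A", "student B"],
        activity="storyboarding a newscast",
        annotations="camera shared between two groups",
        open_questions=["Why is the storyboard drawn horizontally?"],
    ),
]

# The inventory can then be filtered or sorted by any category during stage three.
classroom_images = [r for r in inventory if r.location == "classroom"]
```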


Jewitt, C., & Oyama, R. (2001). Visual meaning: A social semiotic approach. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis. London: Sage.
The term 'resource' is one of the key differences between social semiotic and Paris school structuralist semiotics.


Article 14
Bell, P. (2001). Content analysis of visual images. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis. London: Sage.
This chapter deals with explicit, quantifiable analysis of visual content as a research method. Content analysis is one of the most widely cited kinds of evidence in media studies.
Begin: content analysis begins with a precise hypothesis or question about well-defined variables. (My note: these variables should include a well-defined description of the media and the modes.)
Hypotheses: the hypotheses that content analyses usually evaluate are comparative. Researchers are usually interested in whether, say, women and men are depicted more or less frequently. "Content analysis is used to test explicitly comparative hypotheses by means of quantification of categories of manifest content" (p. 13).
"Visual content analysis is a systematic, observational method used for testing hypotheses about the ways in which the media represents people, events, situations, and so on. It allows quantification of samples of observable content classified into distinct categories. It does not analyse individual images or individual 'visual texts' (compared with psychoanalytical analysis (ch. 6) and semiotic methods (chs 4, 7, 9)). Instead, it allows description of fields of visual representation by describing the constituents of one or more defined areas of representation, periods or types of images."
Typical research questions:
1. Questions of priority/salience of media content: how visibly (how frequently, how large, in what order in a programme) are different kinds of images, stories, or events represented?
2. Questions of 'bias': comparative questions about the duration, frequency, priority or salience of representations of, say, political personalities, issues, policies, or of 'positive' versus 'negative' features of representation.
3. Historical changes in modes of representation of, for example, gender, occupational, class, or ethnically codified images in particular types of publications or television genres.
What to analyse: ‘items’ and ‘texts’
The content can be visual, verbal, graphic, oral… A visual display as text, an advertisement as text, a news item as text, because "it has a clear frame or boundary within which the various elements of sound and image 'cohere', 'make sense' or are cohesive" (p. 15). Texts are defined within the context of a particular research question, within the theoretical categories of the medium (television, Internet), and within the genres (books, portraits, news, soap operas) on which the research focuses.
Visual content analysis isolates framed images or sequences of representation. Unlike semiotic analysis, content analysis classifies all the texts on specified dimensions. It is not concerned with 'reading' or interpreting each text individually. Semiotic analysis is qualitative and focuses on each text or genre in the way a critic focuses on meaning.
Analysis
• Variables: a content variable is any such dimension (size, colour, range, position on a page), any range of options that can be substituted (e.g. male/female), or a number of alternative settings (kitchen, bathroom, bedroom, etc.). Variables include size, represented participants, settings, priority, duration, and depicted role. In content analysis, a variable refers to aspects of how something is represented, not to 'reality'.
• Values: the values are categories and should be mutually exclusive and exhaustive. Use a coding scheme and look for themes. Example variables and values:
• Gender: male, female
• Role: house duties, nurse, executive, teacher
• Setting: school, group, inside, outside
• Size: full situation, partial group
• Alternatively, you could place durations of content emphasis in rank order (for example, in the video newscasts, you could rank the amount of time spent in a variety of roles, in types of newscast situations, using props, etc.).
Quantitative results: comparisons and cross-tabulations
Compare by gender or by visual modality, which relates to the 'truth value' or credibility of statements about the world (Kress and van Leeuwen, 1996). Visual images also 'represent people, places, and things as though they are real…or as though they are imaginings, fantasies, caricatures, etc.' (Kress and van Leeuwen, 1996, p. 161). The book gave an example of a table cross-tabulating defined values of modality by gender; the modalities chosen were standard, factual, and fantasy. (In the newscasts, we could code the types of character, such as newscaster, interviewee, movie star, sports star, etc., and cross them by gender.)
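As a rough illustration only (these are my own made-up codings, not Bell's example or data), a cross-tabulation of coded items by gender and modality could be produced like this:

```python
# Hypothetical sketch of cross-tabulating coded content: each record is one
# coded image or segment, classified on two variables of a coding scheme.
import pandas as pd

coded_items = pd.DataFrame(
    [
        {"gender": "female", "modality": "factual"},
        {"gender": "male",   "modality": "factual"},
        {"gender": "male",   "modality": "fantasy"},
        {"gender": "female", "modality": "standard"},
        {"gender": "male",   "modality": "standard"},
        {"gender": "female", "modality": "factual"},
    ]
)

# Cross-tabulate the two variables; the resulting counts can then be compared
# across categories or tested statistically.
table = pd.crosstab(coded_items["gender"], coded_items["modality"])
print(table)
```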
Reliability:
"Degree of consistency shown by one or more coders in classifying content according to defined values on specific variables" (p. 21). Inter-coder reliability (two or more coders) or intra-coder reliability (one coder, different occasions).
• Measuring reliability: define variables clearly and precisely and ensure that all coders understand these definitions in the same way.
• Train coders in applying defined criteria for each value and variable
• Measure the inter-coder consistency with which two or more coders apply criteria.
If only one coder is to be employed, a pilot study should be conducted to measure intra-coder reliability. Have the coder classify 50-100 examples on all relevant variables (on two occasions) and correlate the two sets of classifications. Use the following methods:
1. Per cent agreement: calculate how frequently the two coders agree in their judgements. With two coders, 90 per cent agreement is recommended. Fewer than ten per cent of items should fall into the "other" category. The fewer values there are on a given variable, the more likely the coders are to agree by chance.
2. Pi: a more sensitive measure of reliability. Pi = (per cent observed agreement - per cent expected agreement) / (1 - per cent expected agreement), where the expected agreement is the sum of the squares of the expected proportions for each value. See page 23.
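Bell works through the pi calculation on page 23; purely as an illustration (made-up codings, not Bell's data), here is a minimal Python sketch of both measures for two hypothetical coders:

```python
# Minimal sketch of the two reliability measures described above, for two
# hypothetical coders who classified the same items on one variable.
from collections import Counter

coder_a = ["male", "female", "female", "male", "male", "female", "male", "female"]
coder_b = ["male", "female", "male",   "male", "male", "female", "male", "female"]

n = len(coder_a)

# 1. Per cent agreement: how often the two coders assign the same value.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# 2. Pi: corrects observed agreement for agreement expected by chance.
# Expected agreement = sum of squared proportions of each value, pooled over both coders.
pooled = Counter(coder_a) + Counter(coder_b)
expected = sum((count / (2 * n)) ** 2 for count in pooled.values())

pi = (observed - expected) / (1 - expected)

print(f"per cent agreement: {observed:.2%}")
print(f"pi: {pi:.2f}")
```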

Limitations: the main limitation is "the relatively untheorized concepts of messages, texts or manifest content that it claims to analyze objectively and then quantify" (p. 24). Categories of visual content usually quantified arise from commonsense social categories; such variables are not defined in any particular theoretical context. (However, what about visual analysis of websites or slide show presentations? If I use defined categories based on Kress and van Leeuwen, on Alessi and Trollip, or on Callow, does this make my categories more valid?) Other limitations include:
• Marxist and neo-Marxist theory: Adorno quipped that 'culture' cannot be defined as quantifiable.
• Other critics cite bias
• Culturally complex and hard to quantify
• Stuart Hall (1980): violent incidents in cinematic genres are only meaningful to audiences who know the genres' respective codes (story structure, thematic elements, plot, character—you must know the genre).
• Winston (1990) discussed 'inference' problems. Content analysis cannot be compared with an assumed reality. Is it true or false? Is there a bias? Is it a positive or negative representation?
• Generalizing from content analysis results can be difficult. Sometimes it is assumed that users understand or are affected by media in similar ways.
• Visual representations raise further theoretical problems of analysis. Many highly coded, conventional genres of imagery have become media clichés. "To quantify such examples is to imply that the greater their frequency, the greater their importance." Yet the easy legibility of clichés makes them no more than shorthand stereotypical elements for most viewers, who may not understand them in the way that the codes devised by a researcher imply (p. 25). (However, in our news media study, we are looking for appropriation of media elements and iconic representations that children take from the real world and use to "play" with "textual toys" (Dyson)—how does this fit in? So our case is special. With children we are looking for these types of representations…but how?)
Validity: going beyond the data. "To conduct a content analysis is to try to describe salient aspects of how a group of texts represents some kinds of people, processes, events, and/or interrelationships between or amongst these. However, the explicit definition and quantification that content analysis involves are no guarantee, in themselves, that one can make valid inferences from the data yielded by such an empirical procedure. This is because each content analysis implicitly (or sometimes explicitly) breaks up the field of representations that it analyses into theoretically defined variables. In this way it is like any other kind of visual or textual analysis. Semiotics posits as semantically significant variables such as 'modality' or 'represented participants' or conceptual versus narrative image elements" (p. 25).
Ask: does the analysis yield statements that are meaningful to those who habitually 'read' or 'use' the images?
The criticism most often leveled against content analysis is that the variables/values are somehow only spuriously objective.
Validity refers to how well a system of analysis actually measures what it purports to measure. "Valid inferences from particular content analyses will reflect the degree of reliability in the coding procedures, the precision and clarity of definitions adopted and the adequacy of the theoretical concepts on which the coding criteria are based" (p. 26).

van Leeuwen: Analyzing visual texts through iconography

This is a summary of one of the chapters from Handbook of Visual Analysis. I think we should further explore the notion of visual semiotics and iconography. Also, as noted in Rose (the chapter on semiotic analysis), I think we should further investigate Barthes' notion of "mythology". I think that Barthes' mythology is a good way to think of our memes. Citing Rose: myth is thus a form of ideology... but the myth is believable precisely because form does not entirely replace meaning... the interpretation of mythologies requires a broad understanding of a culture's dynamics. Therefore, like memes, in terms of information literacy, the more you know, the more you see; and the more you see, the more interesting the meaning you can make.

One other really interesting notion from Barthes is that "myth is not defined by the object of its message, but by the way in which it utters that message: there are formal limits to myth, there are no substantial ones" (p. 117). Myth is a "second order semiological system" (p. 123), a double order meaning system. Individuals who are visually and media literate will be able to interpret this second order system. Myth builds on first order signs, with a signifier and a signified. However, the denotative sign becomes a signifier at the second, or mythological (or memetic), level of meaning, where it is accompanied by its own signified. At this second level the signifier is the form and the signified a concept; their combination, at the level of myth or memetic meaning, is the signification. When image becomes form, the richness of the image is left behind and the gap is filled with signification. Myth makes us forget that things were and are made; it naturalizes the way things are (Rose, p. 91).

Therefore, when we insert memes into movies, we are constructing virtual realities beyond the first level meaning of the simple form. Additionally, using Rosenblatt's theory of transaction between reader and text, these meanings are derived through personal experience and the interaction between reader and text. Also, the meanings change based on the school-based literacy and other literacies of individuals. For example, people well versed in pop culture will find more meaning in certain types of media.
Here is the chapter summary...
Van Leeuwen, T. (2001). Semiotics and iconography. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
In this book chapter, van Leeuwen discusses two approaches to visual analysis: the visual semiotics of Roland Barthes (1973, 1977) and iconography. He begins by discussing how the two approaches search for the meaning of representation and the question of the hidden meanings of images. However, while Barthes' approach studies the image itself, treating cultural meanings as a given currency shared by all in a particular culture, iconography also attends to the context and to how and why cultural meanings came about historically.
In Barthes' semiotics, the key is the layering of meaning; the first meaning is the denotation (who or what is depicted) and the second layer is the connotation (what ideas and values are expressed through what is represented and how it is represented). For Barthes, denotation is relatively simple. Perceiving photographs is close to perceiving reality because they provide a point-by-point reference in terms of denotation. The first layer of interpretation is to simply recognize what we already know. Although denotation is partially "up to the eye of the beholder", it also depends on the context. Several factors relate to problems inherent in a Barthian description of visual denotation and can change the meaning: categorization (including the use of captions); groups vs. individuals (which can have a similar effect); distancing (zooming); and surrounding text (or pictures).
The second layer of meaning, according to Barthes, is connotation—the layer of ideas and values, what things 'stand for' or 'are signs of'. According to Barthes, this idea is already established as part of a cultural norm. For example, specific photographic techniques (zoom, shutter speed, effects) have been defined by Barthes as 'myths' in that they are first very broad concepts but they link together everything associated with a single entity. These are also ideological meanings, serving the status quo or the interests of those in power. Barthes further described the unwritten 'dictionary' of poses that can color meanings; he described the posing of objects, where meaning comes from the object photographed, as a 'lexicon'. However, the specific parts of images are not simply a series of discontinuous 'dictionary entries'; Barthes also reads them together as a 'discursive reading of object-signs' (1977, p. 24). Therefore, there is a 'syntax', because the 'signifier of connotation is no longer to be found at the level of any one of the fragments of the sequence but at that…of the concatenation' (1977, p. 24).
Connotation can also come through the style of artwork or through photogenia, the techniques of photography such as 'framing, distance, lighting, focus, speed' (1977, p. 44). Some analytical categories, such as social distance, point of view, and modality, fall under this heading. Van Leeuwen provides an example of a visual qualitative and quantitative analysis on pages 98-99.
Iconography, the second form of analysis, utilizes three layers of image meaning: representational meaning, iconographical symbolism, and iconological symbolism. 'Representational meaning' is close to 'denotation' in that it is the recognition of what is represented on the basis of our past experience and prior knowledge (Panofsky, 1970). In 'iconographical symbolism', the 'object-signs' not only denote a particular object but also the ideas or concepts attached to it; Panofsky called it 'secondary or conventional subject matter' (1970). Conventions of the past are more recognizable than developing conventions. 'Iconological symbolism' is what could be called ideological meaning, or, as Panofsky explained, the attempt "to ascertain those underlying principles which reveal the basic attitude of a nation, a period, a class, a religion, or a philosophical persuasion" (1970, p. 55).
Van Leeuwen also discriminated between Barthian visual semiotics and iconography in that iconography uses both textual analysis and contextual research. Representational meaning is determined in several ways: the title may indicate who or what is represented, and the identification of what is represented can be done on the basis of personal experience, background research, reference to other pictures, or verbal descriptions.
In terms of symbolism, van Leeuwen distinguished between abstract symbols (abstract shapes with meaning, like crosses) and figurative symbols (represented people, places or things with symbolic value). Figurative symbols are often seen as natural. Additionally, textual and contextual arguments are used that give 'pointers' to tell viewers how to interpret an image. Hermeren (1969) discussed four kinds of pointers: a) the symbolic image is presented with more than normal care and detail, given a prominent position, or made more conspicuous through lighting, tone, color, etc.; b) someone in the picture points at the image or gestures toward it; c) the motif seems out of place; or d) the motif contravenes the laws of nature. Moving from iconographical to iconological symbolism, we move from identifying conventional associated meanings to interpretation. These interpretations depend on 'something more than a familiarity with specific themes or concepts as transmitted through literary sources'. Instead, interpretation requires 'a mental faculty comparable to that of the diagnostician—a faculty which I cannot describe better than by the rather discredited term "synthetic intuition"' (1970, p. 64).
Both methods of interpretation provide arguments for using representational elements such as poses, objects, and elements of style (angle, focus, lighting). Both systems recognize that symbolism may be open or disguised.
Here are summaries of the key points of the following books. First I summarized The Grammar of Visual Design by Kress and van Leeuwen, and then I summarized key concepts in their Multimodal Discourse book. One important concept that Kress alludes to in both books, as well as in Literacy in the New Media Age, is the concept of "reading path". I've been thinking about reading path in terms of how the students are reading films in the camp. It seems as if there are multiple paths. For example, students read temporally. They also read the images as spatial. They also look for changes as the images move. They also look for changes in expressions or uses of specific shots to depict meaning. Hmmm. What does this mean?

Kress, G., & van Leeuwen, T. (2006). Reading Images: The Grammar of Visual Design (2nd ed.). New York: Routledge.
In Reading Images, Kress and van Leeuwen provided a model for visual grammar. A grammar, they pointed out, is an inventory of observed regularities used as a means of representation, not just a delineation of rules and regulations of normative correctness. They noted that the grammars of verbal texts and of visual images have developed side by side. These similarities, however, should not lead one to expect a grammar for visual images of the same kind as that found for linguistic texts.
Kress and van Leeuwen ground their theory in systemic functional linguistics. Here are some of the main features of the theory of visual grammar:
1) Narrative in visual representations: A vector is needed to make a proposition in visual media. A vector is a line or implied line that suggests direction. Elements of a composition are called 'participants', the participant from which a vector departs is an 'actor', and the arrival point is the 'goal'. The meaning is a transaction; if this meaning is reversible it is called an interactive transaction. The geometry of such relationships is a source of meaning. Lack of a clear 'reading path' can lead to ambiguity. A summary of realizations can be found on pages 74-75.
2) Conceptual Representations:
• Classification: Symbolism or shape can be added to these diagrams. Vectors can also be evident in diagrams such as flow charts
• Analytic: Relation of a whole and parts that give ‘possessive attributes’ to the whole. Obvious examples are bar charts and circuit drawings, but portraits can also be structured in this way
3) Representation and interaction: The first direct gaze from the representation of a human out to a viewer is attributed to Van Eyck (1433). This is a power relationship, a powerful way of addressing the viewer—the direct gaze. Other directions of gaze are symbolic. If the subject is looking up, the subject is inferior. If the subject is looking down, she is superior. A level gaze denotes equality.
4) Modality: we prioritize an image by modality markers embedded within. In the west, high modality is signified by realism (truth). In other cultures, it may be more symbolic (religious). Markers of realism can be: detail, depth, quality of material, illumination, color, and craft design skill. Different areas of culture and different “subject” area discourses may have different coding orientations. For example, in areas of science, the modality code is the blueprint; whereas, in advertising, the modality code may require bright colors. In art, modality becomes a play of signifiers; complex and often esoteric relations between modality markers often provide an intertextual high modality. Modality is also conveyed by authenticity.
5) Composition: provides an integration through symbolic meanings of position, weight, and framing. Realizations include: centered, polarized, triptych, circular, margin, mediator, given, new, ideal, real, salience, disconnection, connection.
• Left and right denote the ‘given’ and the ‘new’ due to the broad convention in the West that relates to our custom of reading left to right. The eye tends to start at the left of the image and move right. (note: this is often different with pre-readers—salience plays a role in their visual literacy).
• Top and bottom denote ideal and real, promise and product, emotive and practical, head and foot.
• The center is the place of the ruler, harmony and symmetry. In western art and graphic design, the use of a geometrically centered image is considered naïve.
• Weight includes: size, focus, contrast, and foregrounding. The weightings of these aspects of image have a center of gravity.
• Framing: may be explicit or implied. Lack of framing implies a group identity; whereas, framing individuates.
• Rhythm: in film—time based image. In a book, flicking through a page. Rhythm in multimedia could also refer to zooms, pans, and transitions in a sequence of time.
• Salience is the degree to which an element draws attention to itself due to size, place, and overlapping of elements (color, tone, sharpness, definition, etc.).
• Connection/disconnection: the degree to which an element is connected or visually separated through framing, empty space, vectors, and differences/similarities in color and shape.
6) Materiality and Meaning: Inscription: brush strokes…also hand-made marks, marks recorded with technology, and marks synthesized in technology. Color is a semiotic mode, which carries meanings of its own (including cross-cultural variations). Color also has emotions attached to it. Additionally, in certain arenas, color has textual connotations (blue text on a computer symbolizes a hyperlink). Color coordination can promote cohesion. Distinctive features of the semiotics of color include: value, saturation, purity, modulation, differentiation, and hue. Finally, color schemes can provide significant design qualities.


Article 4
Kress, G., & van Leeuwen, T. (2001). Multimodal Discourse: The Modes and Media of Contemporary Communication. New York: Oxford University Press.
In Multimodal Discourse, Kress and van Leeuwen outline a theory of communication for the age of interactive multimedia. Beginning with the concept of 'design', they outline an approach to social discourse where color and font play a role equal to language. They defined multimodality as the "use of several semiotic modes in the design of a semiotic product or event, together with the particular way in which these modes are combined—they may for instance reinforce each other (say the same thing in different ways), fulfill complementary roles…or be hierarchically ordered" (e.g., action films where action is dominant and music adds to the presence). Furthermore, they articulated communication as a "process in which a semiotic product or event is both articulated or produced and interpreted or used" (p. 111).
In the final chapter, they delineate a multimodal theory of communication, which concentrates on two things: a) the semiotic resources of communication (modes and media); and b) the communicative practices in which these resources are used (discursive, interpretive, production, design, and/or distribution practices). The key point they made is that meaning is made "not only with a multiplicity of semiotic resources, in a multiplicity of modes and media but also at different 'places' within each of these" (p. 111).
One of the key elements in the novelty of multimodal discourse is the aspect of design. Discourses can be realized in different modes; each mode adds layers of meaning. Design consists of a 'blueprint', an overall spatial schema of a page with bits of information. This can also be used in connection with other modes (text, color, spatial arrangement, font, etc.). Therefore, on a multimodal "page", information is spatially, rather than sequentially, organized. Spatial order—where elements are placed, how salient they are, in which ways they are framed, how they are connected, color harmony/disharmony—becomes a key aspect of the visual schema. Unlike a traditional text, where the reader follows a sequential order, in a visual text importance is suggested by hierarchies of salience.
Further elements of design are related to production and distribution. For example, the way in which separate bits of information are produced (with boxes on pages—such as a website) adds to the meaning. In this way, typography also becomes significant. The use of a handwriting font depicts a personal message, something that has become conventional. Using the premise of “Provenance” --“the idea that signs may be imported from one context into another in order to signify the ideas and values associated with that other context by those who do the importing” (p. 23)--the use of handwriting is a sign of personal address that has become conventional. However, it has not been grammaticalised; typography is still ‘lexical’ and works through connotation. Therefore, the meaning of the font is different than the meaning of the actual text, which follows grammatical rules.

Essential Articles

Here are my summaries of some essential articles for our research
deb
Callow, J. (2003, April). Talking about visual texts with students. Reading Online, 6(8), Available: http://www.readingonline.org/articles/art_index.asp?HREF=callow/index.html
Using the multiliteracies visual design concepts of Kress and van Leeuwen (1996), Callow investigated what metalanguage students used when talking about the visual aspects of their multimedia texts.
Two Australian teachers, each working with 25 sixth-grade (11-year-old) students, participated in the study. The science and English curricula were combined with the school's computer technology program to create a six-week unit of study. The context of the study consisted of students investigating food production and working together to create PowerPoint slide presentations that integrated text and image. The researchers provided students, working in groups of four or five, with several facts. Students combined the facts; paraphrased; sequenced the information; and included text, images, sound, and animation as part of their multimedia presentations.
Researchers used a qualitative approach (Merriam, 1998). Sources of data included field notes, discussions with teachers, collection of work samples, and group interviews with students about their work. Discussions with students included their comments on the features of image, color, selection, salience, and layout. In addition, the researchers also asked the students evaluative questions about the effectiveness of their use of the visual features in presentations. Student perceptions of what qualities made a good slide show—including features of color, selection, image, salience, and layout—were the main criteria for evaluating presentations.
When asked what makes an effective PowerPoint, the students noted, intuitively: color (15 students), animation (10 students), sounds (8 students), text features (7 students), backgrounds (6 students), and pictures (5 students). However, when asked why they chose a particular element, few students were able to express specific reasons for their choice. Interestingly, students decided that photographs and clipart would be effective in different circumstances. They noted that photographs were "more realistic" and denoted a serious tone, making them more effective for adults. Clipart, on the other hand, would be an effective visual for younger children or a less serious tone.
In terms of metalanguage, students discussed many features of design by comparing their work to books or other visually enhanced texts. Although the students were unable to discuss the elements in terms of a specific metalanguage, they were able to justify why they made particular choices.
The strength of this article is that it investigates an issue essential to students competing in a technological global economy: the creation of effective presentations. Although written texts remain important means of communication, final presentations in businesses increasingly include multimodal “texts.”
I also found weaknesses in this article. First of all, it would have been helpful to see more examples of student work or vignettes that detailed a presentation. In addition, the researcher characterized PowerPoint as linear in nature rather than weblike. However, using PowerPoint to create a museum kiosk-like presentation, students can easily add hyperlinks with buttons within the show, between shows, and to online documents. Perhaps at the time of the study, the version of PowerPoint did not include these features. On the other hand, few people are familiar with the interactive features of PowerPoint, including action buttons and custom animation.
I found the verbal reports of effectiveness compelling—so compelling that I plan to use this article as a major element in my dissertation. Another strength was that this study was easy to follow and included detailed descriptions of the presentations. However, I would have liked to see more details about the actual PowerPoint creation process.
Implications of this article include the fact that when working with visual and multimodal texts, students need to understand not only technical skills but also how these elements create meaning. In particular, they must understand how the features of color, salience, images, and layout design impact the effectiveness of the presentation. Educators need to understand the use and meaning-creation potential of these features.
Integration of multiliteracies in new learning environments is a new and exciting concept, one I intend to study in detail over the next couple of years. With the advent of social networking sites and video sharing (YouTube), anyone can publish a multimedia message. No longer are elements of design strictly in the hands of the professionals; amateurs can use simple design tools to create their own messages. Schools must keep in touch with the realities of literacy. What types of literacies are effective now and what types of literacies will be effective in the future?




Article 2
Semali, L., & Fueyo, J. (2001, December/January). Transmediation as a metaphor for new literacies in multimedia classrooms. Reading Online, 5(5). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=semali2/index.html

This research used a case study approach to investigate transmediation in terms of exemplars noted in the classroom situation.
In the article, the authors first defined key terms:
• Multiple sign systems: art, movement, sculpture, dance, music, words, digital, and multimedia.
• Transmediation: responding to cultural texts in a range of multiple sign systems.
• New literacies: "the ability to read, analyze, interpret, evaluate, and produce communication in a variety of textual environments and multiple sign systems" (p. 1).
Then, following a well-developed literature review, the authors discussed their central concerns:
• “What is the relationship between what students know and the signs they encounter in their classrooms (about race, class, gender, disability, and sexual orientation)?
• What meaning do they make of these semiotic systems in their literacy practices?” (p. 3).
The authors provided some detailed cases that illustrated exemplars of transmediation activities. Then they discussed the first scenario in terms of semiotics. However, when I turned from page five to page six, I thought a major part of the article was missing. The authors simply noted, "equally, the other scenarios aim to open our eyes to a variety of symbolisms, codes, and conventions…" They failed to analyze the other scenarios in terms of sign systems, transmediation, and new literacies. What began as a very exciting article fell short of satisfying my desire to learn more about how the real-life cases related to the background theory.
Despite the weaknesses in the analysis, I found the format clear and easy to follow. I plan to use a similar format to write up results for a qualitative study on multimedia creations.
Article 5
Muffoletto, R. (2001, March). An inquiry into the nature of Uncle Joe’s representation and meaning. Reading Online, 4(8). Available http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/muffoletto/index.html
In this article, Muffoletto addressed critical or reflective visual literacy. In terms of visual literacy, Muffoletto noted that a diversity of meanings has traditionally been devalued in classroom settings. Reflective visual literacy empowers students to understand the power of the image and to evaluate images based on their personal experiences. Comprehending the process of reflective visual literacy is only possible if teachers incorporate the notion of multiple perspectives into their daily teaching. Using photo essays, students should be allowed to express their own voices and describe their own perceptions of how the image reflected their experience. The ultimate power of reflective visual literacy is that it situates visual representations and their interpretation (construction of meaning) in a context that raises issues about benefit and power.
Muffoletto provided an extensive discussion of how individuals perceive image as text. Images—and our perceptions thereof—are not natural. We see what our eye and brain let us see. We experience the world through a reality that has been constructed for us through social and biological limitations. "Like texts, visual representations (visual texts) are the result of ideologically formed intentional acts…the visual text, as a representation that stands in place of an object or concept, requires a social codification—the construction of meaning through a system of codes used by the author and reconstructed by the reader."
Muffoletto discussed the "fluid representational nature of icons, signs, and symbols" he found in photographs. At one moment the picture is an icon (this is a picture of…), a sign (usually I associate this with…), or a symbol (more complex associations). Meanings are assigned to the image by individuals who are members of historical social communities (Fish, 1980)—including gender, race, religious, cultural, and economic perspectives.
Muffoletto further grounded the concept of visual literacy within semiotics, the study of signs, which could be a useful tool for understanding the social and historical construction of meaning. Semiotics positions representation from three perspectives: icons, signs, and symbols. An icon, he noted, is a representation with a strong perceptual relationship to the object for which it stands (Barthes, 1964). Signs are conventions—"agreed upon abstractions that we associate with some thing or concept." Letters, colors, shapes, or images on a screen mean nothing by themselves; we need to organize and assign meaning to them. Symbols (Langer, 1976) are instruments of thought; they work differently from icons and signs because rather than corresponding directly to objects or concepts, they rely on conceptual frameworks. For example, a star, or an image of a star, may refer to a religion, but it also symbolizes all that the particular religion stands for (Wollen, 1969).
Reading implies an intention to construct meaning. From a modernist perspective, the meaning of a text lies within the text itself, placed there by the author; the role of the reader is to find the truth. From a postmodern perspective, meaning is a result of interaction between reader and text; the meaning is constructed by author and reader. Muffoletto stated that constructing meaning can be seen through two different lenses: politics and pedagogy. Traditionally, teachers have been responsible for giving the "official" truth or meaning of texts, and standardized tests emphasize this. Diversity of meaning is devalued. These practices are a result of seeing through only one lens. Reflective and critical analysis practices allow a democratic reconstruction of images.
The principles of critical visual literacy are essential in the increasingly visual world that children face in their out-of-school literacy contexts. Muffoletto noted that the foundations of reflective visual literacy require that students value the differences of understanding and expression involved with the construction and deconstruction of all texts as social products. Furthermore, as technology changes, our understanding of "reality" changes. Muffoletto stated that educators must consider new literacies in terms of power relationships and how meaning is constructed.
Article 8
Messaris, P. (2001). New literacies in action: Visual education. Reading Online, 4(7). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/action/messaris/index.html
Messaris, a leading researcher and theorist in the area of visual literacy, argued for a deepening of visual literacy education beyond a critical analysis of visual texts. He noted that the process of creating visual images contributes more to students’ understanding of the multiplicity of visual information to which they are exposed in a multimedia saturated world. However, despite the exposure to media, Messaris asked, are students indeed “media savvy”? Furthermore, he noted that one cannot assume that the consumption of visual images leads to improvement in a student’s creative abilities.
Then Messaris went on to describe the theoretical implications of the connection between visual creativity and greater cognition as defined by “spatial intelligence” (Gardner, 1983). Spatial intelligence, he noted, is the “process of forming mental representations of three-dimensional reality as a basis for understanding one’s environment and interacting with it effectively. It is a type of intelligence crucial for success in professions such as architecture or carpentry, but it is also a vital ingredient of any person’s everyday physical activities.” Messaris provided examples of how a film editor uses multiple devices for constructing meaning, including zooms, pans, transitions, focus, spatial layout, angle, etc.
Finally, Messaris discussed the implications of visual literacy for education. He noted that students must learn to create visual meaning, not just consume it. Visual connections come easy to experienced viewers. However, the ability to create multimedia creations, combining images, does not come so easily. It is a form of knowledge of a visual grammar that comes through active learning. Through the act of communicating through images, students move beyond seeing media as a “window on reality” to a more enlightened state where they are able to construct new realities through the manipulation of visual conventions. The higher order spatial and analogical thinking skills used in film editing, Messaris argued, carry over to other realms of experience; therefore, learning these skills should “be considered the core objective of an actively oriented visual curriculum” (p. 8).
Lemke, J. (2006). Toward critical multimedia literacy: Technology, Research, and Politics. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 3-14).
In school, students are taught to carefully analyze and deconstruct text. However, most often, the accompanying visual images are ignored. Although multimedia texts outweigh monomodal (writing only) texts, school-based curriculums tend to ignore visual literacy. With the rise of the World Wide Web, “reading” images has become an even more essential skill in decoding multimodal texts, such as web pages.
Lemke argued that we need a “broader definition of literacy itself, one that includes all literate practices, regardless of medium” (p. 4). Texts, he stated, are converging; television programs have websites and so do popular books, movies, and video games. For example, Harry Potter, which began as a print media phenomenon, has moved to websites, movies, television commentaries, and even video games. Similar content, images, and textual themes are distributed over a variety of media.
In light of the complexity of today's literate activities, Lemke discussed the necessity of conceptual frameworks to help us "cope with the complexity and the novelty of these new multimedia constellations" (p. 5). The field of social semiotics (also known as critical discourse studies, critical media studies, and critical cultural studies) has been developing key concepts. The core idea of semiotics is that all human meaning-making shares a number of features; in multimedia semiotics, these common features form the basis on which integration across media is possible. The fundamental unit is the meaning-making practice, whether that practice is used to create or to analyze such texts. The model of meaning-making applies across disciplines and across semiotic systems. In fact, we can never make meaning with one mode alone, Lemke argues:
If you speak it, your voice also makes nonlinguistic meaning by its timbre and tone, identifying you as the speaker, telling something about your physical and emotional state, and much else. If you write it, your orthography presents linguistic meaning separately from additional visual meanings (whether in your handwriting or choice of font) (p. 5).
All communication, Lemke noted, is multimodal communication. He further defined multimodality as "the combination or integration of various sign systems or semiotic resource systems, such as language, gesture, mathematics, music, etc." (p. 5). The resulting product is a form of gestalt: the whole is not only greater than the sum of its parts, but the way in which meaning is represented effectively through a variety of modes matters more than any one mode.
For example, when we interact with a website, we make trajectories across links that carry us to a wide variety of different genres and different media. We not only surf within sites, we surf between sites, often discovering video, audio, and interactive media that accompany more traditional words and static images. Lemke notes that we are learning "to make meaning along these traversals" that are "relatively free of the constraints of conventional genres." Additionally, the intertexts create meanings of their own. As Lemke noted, "as our culture increasingly enmeshes us in constellations of textual, visual, and other themes that are designed to be distributed across multiple media and activities…these cross-activity and cross-medium connections tend to become coherently structured" (p. 7).
In terms of these multimedia texts, Lemke noted the necessity of key questions that should be answered as we prepare to teach critical multimedia literacy. In most multimodal presentations, different modes are used to represent meaning. One cannot simply deconstruct the verbal message and obtain the whole meaning. Likewise, one cannot simply decode the visual design elements. These modes must function together. Techniques of multimodal analysis must “show how text and images are selectively designed to reinforce one another” (p. 8). No single meaning can be projected across a single modality. The sign from each medium can portray a different message; some of these may create perverse or divergent meanings.
As educators, we must teach students to become specialists in critical multimedia literacy in order for them to make free and democratic choices. To be critical, Lemke notes, is not "just to be skeptical or to identify the workings of covert interests…it is also to open up alternatives, to provide the analytical basis for the creation of new kinds of meanings" (p. 13). A true critical discourse helps students not only to critique, but to create, author, and produce multimedia texts.
My commentary: In the new world of Web 2.0—a world where end-users, consumers, teachers, and students are creating content for themselves and their peers—self-generated online texts can be created as word-processing documents, audio files, or videos. Free video creation software, such as Windows Movie Maker, allows any individual to create and upload a fully edited, semi-professional movie to the Internet from the comfort of their own home. YouTube is a prime example of this type of opportunity. Amateurs are able to create multimedia and showcase it to an audience, a process that at times takes no more than an hour. So how will these new multimedia literacies be defined?

Article 10
Hobbs, R. (2006). Multiple visions of multimedia literacy: Emerging areas of synthesis. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 15-28).
Hobbs, first of all, discussed the impact of "screen activity" on American children and teens, who spend "an average of eight hours per day using media, including television, videogames, the Internet, newspapers, magazines, films, radio, recorded music, and books" (p. 15; as cited in Kaiser Family Foundation, 2001). In a world constantly bombarded with new and changing literacies, children need to be able to find and critique media messages. Educators traditionally rely on textual and language competencies. However, Hobbs noted that it is also essential for students to learn to use symbol systems (images, music, sound, motion) as a means of expression and communication. Literacy educators are beginning to recognize that they need to teach students how to read and respond to the array of media technologies in order to prepare them for the 21st century.
Educators no longer own the concept of literacy; academic scholars from a wide variety of disciplines (media studies, psychology, cultural anthropology, communications, history, library and information science, literary theory, linguistics, rhetoric, etc.) have become increasingly interested in how individuals make meaning in reading and composing multimedia texts. As a result, educators are using new literacies terminology, such as: visual literacy, media literacy, critical literacy, information literacy, and technology literacy.
Visual literacy, a field based on nearly 100 years of work by interdisciplinary scholars, has long discussed the importance of visual materials and concepts (such as selection, framing, composition, sequence, and aesthetic dimensions of images) in the classroom. Scholars interested in visual literacy have examined how images are interpreted and understood, how images and text interact in meaning-making, how exposure to visual images affects cognitive development, and how semiotic dimensions can be examined. Learning about the visual conventions of images helps give “readers” a way to analyze texts and “creators” some strategies to enhance their own productions. Texts are only representations of reality and key visual grammars exist cross-culturally in the creation of these texts.
Information literacy has been defined by the American Library Association as the abilities an individual needs to recognize when they need information and how to locate, evaluate, and use it. In many instances, however, when information literacy is actually taught in schools, it is defined as a narrow checklist of specific skills rather than as a more critical analysis drawing on a multiplicity of comprehension techniques.
Media literacy educators in the United States have been influenced by the work of British, Canadian, and Australian scholars who have discussed engaging educational practices that teach children to analyze mass media and popular culture. Good media literacy pedagogy stresses the process of inquiry and situated, active learning based on the work of Freire and Macedo.
Critical literacy arose from traditions in semiotics and cultural studies. Meaning-making in a critical literacy arena involves the social, historical, and political contexts combined with the author's meaning. Critical, as used by scholars, refers to the recognition of oppression and exploitation embedded in texts. Critical literacy scholars "explore reading within a sociocultural context [and] examine and understand how various texts, including pictures, icons, and electronic messages (as forms of symbolic expression) are used to influence, persuade, and control people" (p. 19).
After defining the new literacies, Hobbs discussed a model for integrating the conceptual tenets of multimedia literacies. School practices must change to incorporate themes of authors and audiences, meanings and messages, and representations and reality. Although some evidence is emerging from research on multimedia literacies, Hobbs noted that most examinations have looked at a small number of students in a single classroom (Alvermann, Moon, & Hagood, 2001; Anderson, 1983). Some have explored whether students learn the appropriate facts through multimedia (Baron, 1985; Kelly, Bunter & Kelly, 1985) or whether a video broadcast affects cognitive or critical literacy skills (Vooijs & Van der Voort, 1993). Recently, some case studies have documented educators' practices in classrooms (Hart & Suss, 2004; Hart, 1998; Hurrell, 2001; Kist, 2000—note to self, also include Leu book here).
Hobbs noted that further research should continue to explore how and why multiliteracies are incorporated into classroom practices. Furthermore, she noted that educators must be responsive to Masterman (1985) who identified a central outcome for media education: the ability to apply skills and strategies learned in the classroom to everyday life. Such work depends on teachers who have the initiative, creativity, imagination and perseverance to enable students “to develop the competencies they need to be citizens of an information age” (p. 25).

Article 11
Royce, T.D. (2007). Intersemiotic complementarity: A framework for multimodal discourse analysis. In T. D. Royce & W. L. Bowcher (Eds.). New directions in the analysis of multimodal discourse. (pp. 63-109).
Royce noted that the theoretical foundation for multimodal discourse analysis is derived from the Systemic Functional Linguistics (SFL) view of language as a 'social semiotic' (Halliday, 1978). Halliday made four central claims about language: it is functional in terms of what it can do or what can be done with it; it is semantic in that it is used to make meaning; it is contextual, in that meanings are affected by social and cultural situations; and it is semiotic, in that it is a process of selecting from "the total set of options that constitute what can be meant" (Halliday, 1978, 1985, p. 53). Halliday also identified three types of meanings, or "metafunctions", that operate simultaneously in the semantics of every language: the ideational metafunction (responsible for "the representation of experience"); the interpersonal metafunction (meaning as a form of action); and the textual metafunction (maintaining relevance to the context). Reading or viewing involves the simultaneous interplay of three elements that correlate with the metafunctions: represented participants (elements that are actually present in the visual), interactive participants (participants interacting with each other in the act of reading—graphic designer and reader), and the visual's coherent structural elements (compositional features such as elements of design or layout).
Royce provides a detailed analysis based on the intertextuality of these factors on pages 68 and 69. The interpretation deals with how visual and verbal modes interact "intersemiotically" with the identification of participants, represented processes or activities, the circumstances, and the attributes. Each of these aspects can be discussed in terms of Visual Message Elements. Royce further explained that in the same way that metafunction concepts can be applied to visual modes of communication, the analysis of cohesion in text by Halliday and Hasan (1985) can be used to "explicate the ideational cohesive relations between the modes in a multimodal text". For this purpose, Royce used the following sense relations: Repetition (R) for the repetition of experiential meaning; Synonymy (S) for a similar meaning; Antonymy (A) for an opposite meaning; Hyponymy (H) for a general class of something and its subclasses; Meronymy (M) for reference to the whole of something and its parts; and Collocation (C) for words that tend to co-occur within a given subject area (Halliday, 1985).
Furthermore, the examination of the intersemiotic interpersonal features of a multimodal text looks at the relations between the visual and the viewer and how they are represented (Kress & van Leeuwen, 1996). This can be very important in terms of the speech functions distinguished by Halliday (1985): offer, command, statement, and question. Visual type can also be important, as can the level of involvement of a viewer, which is realized by visual angle or point of view. Power relations between viewers and the represented participants are encoded in the angle between them. The degree of social distance is realized by the size of frame: close-up, medium, and long shot. These different kinds of shots parallel the distances people use when they talk face to face (Kress and van Leeuwen, 1990). Relationships can occur when the interpersonal meanings in both visual and verbal modes co-occur on the same page, related through reinforcement of address and through intersemiotic attitudinal congruence and attitudinal dissonance (modality) relations. Furthermore, relationships can also occur when the compositional meanings are integrated by design features such as information value, salience, visual framing, visual synonymy, and potential reading paths.