Wednesday, June 13, 2007

Two more chapter summaries from Handbook of Visual Analysis

Collier takes a more qualitative approach; the next is a more quantitative, reductionist approach.
Collier, M. (2001). Approaches to analysis in visual anthropology. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
“analysis of visual records of human experience is a search for pattern and meaning, complicated and enriched by our inescapable role as participants in that experience” (p. 35).
The importance of all of the elements of an image… the visual field contains a complex range of phenomena. Responsibly address many aspects of images, “recognizing that the search for meaning and significance does not end in singular ‘facts’ or ‘truths’ but rather produces one or more viewpoints on human circumstances, and that while ‘reality’ may be elusive, ‘error’ is readily achieved” (p. 36).
Analysis and the importance of contextual information. Making good research collections… good documentary photos are different from “good quality” photography. A good documentary photograph is often presented as a single image divorced from the larger context (my note: with digital photography, you can do both… take the wider photo and zoom in on the particulars).
A good research collection: carefully made with careful and comprehensive temporal, spatial, and other contextual recording, good annotation, collection of associated information and maintenance of this information in an organized data file.
DIRECT ANALYSIS:
“Any major analysis should begin and end with open-ended processes, with more structured investigation taking place during the mid-section of this circular journey” p. 39
The model, adapted from Collier and Collier (1986) outlines a structure for working with images.
1. First stage: observe the data as a whole. Look at and listen to overtones and subtleties to discover connecting and contrasting patterns. Trust feelings and impressions. Take notes and identify the images they are a response to. Write down all the questions the images trigger in your mind… these may be good for future analysis. See and respond to the photos as a statement of cultural drama. Let these characterizations form a structure within which to place the remainder of your research.
2. Second stage: make an inventory or log of all your images. Design the inventory around categories that reflect and assist the research goals.
3. Third stage: structured analysis. Quantitative: go through the evidence with specific questions… measure distance, count, compare. The statistical information can be plotted on graphs, listed in tables, or entered into a computer for statistical analysis. Qualitative: produce detailed descriptions.
4. Fourth stage: search for meaning and significance by returning to the complete visual record. Respond again to the data in an open manner. Re-establish context, lay out the photos, view the images in their entirety, and then write your conclusions as influenced by this final exposure to the whole.


Jewitt, C., & Oyama, R. (2001). Visual meaning: A social semiotic approach. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
The term ‘resource’ is one of the key differences between the social semiotic approach and Paris school structuralist semiotics.

Article 14
Bell, P. (2001). Content analysis of visual images. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
This chapter deals with explicit quantifiable analysis of visual content as a research method. Content analysis is one of the most widely cited kinds of evidence in Media studies.
Begin: Content analysis begins with some precise hypothesis or question about well-defined variables. (My note: these variables should include a well-defined description of the media and the modes.)
Hypotheses: the hypotheses which content analysis usually evaluates are comparative. Researchers are usually interested in whether, say, women and men are depicted more or less frequently. “Content analysis is used to test explicitly comparative hypotheses by means of quantification of categories of manifest content” (p. 13).
“Visual content analysis is a systematic, observational method used for testing hypotheses about the ways in which the media represents people, events, situations, and so on. It allows quantification of samples of observable content classified into distinct categories. It does not analyse individual images or individual ‘visual texts’ (compared with psychoanalytical analysis (ch. 6) and semiotic methods (chs 4, 7, 9)). Instead, it allows description of fields of visual representation by describing the constituents of one or more defined areas of representation, periods or types of images.”
Typical research questions:
1. Questions of priority/salience of media content: how visibly (how frequently, how large, in what order in a programme) are different kinds of images, stories and events represented?
2. Questions of ‘bias’: comparative questions about the duration, frequency, priority or salience of representations of, say, political personalities, issues or policies, or of ‘positive’ versus ‘negative’ features of representation.
3. Historical changes in modes of representation of, for example, gender, occupational, class or ethnically codified images in particular types of publications or television genres.
What to analyse: ‘items’ and ‘texts’
The content can be visual, verbal, graphic, oral… A visual display as text, an advertisement as text, a news item as text… because “it has a clear frame or boundary within which the various elements of sound and image ‘cohere’, ‘make sense’ or are cohesive” (p. 15). Texts are defined within the context of a particular research question, within the theoretical categories of the medium (television, internet), and within the genres (books, portraits, news, soap operas) on which the research focuses.
Visual content analysis isolates framed images or sequences of representation. Unlike semiotic analysis, content analysis classifies all the texts on specified dimensions; it is not concerned with ‘reading’ or interpreting each text individually. Semiotic analysis is qualitative and focuses on each text or genre in the way a critic focuses on meaning.
Analysis
• Variables: a content variable is any such dimension (size, colour, range, position on a page), any range of options that can be substituted (i.e. male/female), or a number of alternative settings (kitchen, bathroom, bedroom, etc.). Variables include size, represented participants, settings, priority, duration and depicted role. In content analysis, a variable refers to aspects of how something is represented, not ‘reality’.
• Values: the values are the categories of a variable and should be mutually exclusive and exhaustive. Use a coding scheme and look for themes. An example set of variables and their values (also sketched in code after this list):
Gender: male, female
Role: house duties, nurse, executive, teacher
Setting: school, group, inside, outside
Size: full situation, partial group
• Alternatively, you could rank content emphasis by duration (for example, in the video newscasts, you could rank the amount of time spent in a variety of roles, in types of newscast situations, using props, etc.).
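As a rough illustration (not something from Bell's chapter), a scheme like the one above could be stored and checked in code; the variable names, values and the check_coding helper below are all hypothetical:

# A minimal sketch of a coding scheme as a data structure, using the example
# variables and values above; every coded image should supply exactly one
# value per variable (mutually exclusive and exhaustive categories).

CODING_SCHEME = {
    "gender": {"male", "female"},
    "role": {"house duties", "nurse", "executive", "teacher"},
    "setting": {"school", "group", "inside", "outside"},
    "size": {"full situation", "partial group"},
}

def check_coding(coded_image):
    """Return a list of problems with one coded image (empty list means valid)."""
    problems = []
    for variable, allowed in CODING_SCHEME.items():
        if variable not in coded_image:
            problems.append("missing variable: " + variable)
        elif coded_image[variable] not in allowed:
            problems.append("invalid value for " + variable + ": " + coded_image[variable])
    return problems

# Example: one hypothetical coded image.
print(check_coding({"gender": "female", "role": "teacher",
                    "setting": "school", "size": "full situation"}))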
Quantitative results: comparisons and cross-tabulations
Compare by gender or by visual modality, which relates to the ‘truth value’ or credibility of statements about the world (Kress and van Leeuwen, 1996). Visual images also ‘represent people, places and things as though they are real… or as though they are imaginings, fantasies, caricatures, etc.’ (Kress and van Leeuwen, 1996, p. 161). The chapter gives an example of a table cross-tabulating defined values of modality by gender; the modalities chosen were standard, factual and fantasy. (In the newscasts, we could code the types of character, such as newscaster, interviewee, movie star, sports star, etc., and cross-tabulate them by gender; a rough sketch of such a cross-tabulation follows.)
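A minimal sketch of such a cross-tabulation, assuming a hypothetical list of coded images (the coded_images structure and its counts are made up for illustration, not taken from the chapter):

# Cross-tabulate modality by gender over a hypothetical set of coded images;
# real data would come from a coding scheme like the one sketched above.
from collections import Counter

coded_images = [
    {"gender": "female", "modality": "factual"},
    {"gender": "female", "modality": "fantasy"},
    {"gender": "male", "modality": "factual"},
    {"gender": "male", "modality": "standard"},
]

# Count each (modality, gender) pair, then print a simple table.
table = Counter((img["modality"], img["gender"]) for img in coded_images)
for modality in ("standard", "factual", "fantasy"):
    counts = {g: table[(modality, g)] for g in ("male", "female")}
    print("%-9s male=%d female=%d" % (modality, counts["male"], counts["female"]))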
Reliability:
“Degree of consistency shown by one or more coders in classifying content according to defined values on specific variables” (p. 21). Reliability may be inter-coder (two coders) or intra-coder (one coder, different occasions).
• Measuring reliability: define variables clearly and precisely and ensure that all coders understand these definitions in the same way.
• Train coders in applying defined criteria for each value and variable
• Measure the inter-coder consistency with which two or more coders apply criteria.
If only one coder is to be employed, a pilot study should be conducted to measure intra-coder reliability. Have the coder classify 50-100 examples on all relevant variables (on two separate occasions) and correlate the two sets of classifications. Use the following methods:
1. Per cent agreement: calculate how frequently the two coders agree in their judgements; agreement of at least 90 per cent between the two coders is recommended. Fewer than ten per cent of items should fall into the ‘other’ category. The fewer values there are on a given variable, the more likely there is to be agreement between coders based on chance.
2. Pi: a more sensitive measure of reliability that corrects for chance agreement. Pi = (per cent observed agreement - per cent expected agreement) / (1 - per cent expected agreement). The expected agreement is the sum of the squares of the expected proportions for each value. See page 23. (A rough calculation sketch follows this list.)
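A rough sketch of both measures for two coders on one variable; the codings are invented, and the pooled-proportion estimate of expected agreement follows the standard Scott's pi calculation rather than a worked example from the chapter:

# Per cent agreement and pi (Scott's pi) for two coders classifying the same
# items on one variable; the codings below are hypothetical.
from collections import Counter

coder_a = ["male", "female", "female", "male", "female", "male"]
coder_b = ["male", "female", "male",   "male", "female", "male"]
n = len(coder_a)

# Observed agreement: proportion of items on which the two coders agree.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement: sum of squared proportions of each value, with the
# proportions pooled across both coders.
pooled = Counter(coder_a) + Counter(coder_b)
expected = sum((count / (2 * n)) ** 2 for count in pooled.values())

pi = (observed - expected) / (1 - expected)
print("per cent agreement = %.0f%%, pi = %.2f" % (observed * 100, pi))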

Limitations: the main limitation is “the relatively untheorized concepts of messages, texts or manifest content that it claims to analyze objectively and then quantify” (p. 24). Categories of visual content usually quantified arise from commonsense social categories. Such variables are not defined in any particular theoretical context. (However, what about visual analysis of websites or slide show presentations? If I use defined categories based on Kress and van Leeuwen, or on Alessi and Trollip, or on Callow, does this make my categories more valid?) Other limitations include:
• Marxist and neo-Marxist theory… Adorno has quipped that ‘culture’ cannot be defined as quantifiable.
• Other critics cite bias
• Culturally complex and hard to quantify
• Stuart Hall (1980): violent incidents in cinematic genres are only meaningful to audiences who know the genres’ respective codes (story structure, thematic elements, plot, character: you must know the genre).
• Winston (1990) discussed ‘inference’ problems: content analysis cannot be compared with an assumed reality. Is it true or false? Is there a bias? Is it a positive or negative representation?
• Generalizing from content analysis results can be difficult. Sometimes it is assumed that users understand or are affected by media in similar ways.
• Visual representations raise further theoretical problems of analysis. Many highly coded, conventional genres of imagery have become media clichés. “To quantify such examples is to imply that the greater their frequency, the greater their importance. Yet the easy legibility of clichés makes them no more than short-hand stereotypical elements for most viewers who may not understand them in the way that the codes devised by a researcher imply” (p. 25). (However, in our news media study, we are looking for appropriation of media elements and iconic representations that children take from the real world and use to “play” with “textual toys” (Dyson). How does this fit in? So our case is special. With children we are looking for these types of representations… but how?)
Validity: going beyond the data. “To conduct a content analysis is to try to describe salient aspects of how a group of texts represents some kinds of people, processes, events, and/or interrelationships between or amongst these. However, the explicit definition and quantification that content analysis involves are no guarantee, in themselves, that one can make valid inferences from the data yielded by such an empirical procedure. This is because each content analysis implicitly (or sometimes explicitly) breaks up the field of representations that it analyses into theoretically defined variables. In this way it is like any other kind of visual or textual analysis. Semiotics posits as semantically significant variables such as ‘modality’ or ‘represented participants’ or conceptual versus narrative image elements” (p. 25).
Ask: does the analysis yield statements that are meaningful to those who habitually ‘read’ or ‘use’ the images?
The criticism most often leveled against content analysis is that the variables/values are somehow only spuriously objective.
Validity refers to how well a system of analysis actually measures what it purports to measure. “Valid inferences from a particular content analysis will reflect the degree of reliability in the coding procedures, the precision and clarity of definitions adopted and the adequacy of the theoretical concepts on which the coding criteria are based” (p. 26).
