Debbie wrote on 7/31:
The book Screenagers by Douglas Rushkoff (author of ten books on media culture and values, commentator on NPR, professor in NYU's Interactive Telecommunications Program, winner of the Marshall McLuhan Award for best media book) is associated with the Media Ecology Association. Here is a link to their conference for next year: http://www.media-ecology.org/activities/ It's in Mexico City...but we have some great stuff. If we took the stop motion animation week, for example, we could show how play and the process of creating media intersected and created some really high-level thoughts and connections about how the world operates. Fantasy/sci-fi fun is important for kids to use to understand complex realities and cooperative living structures. We have this data in our planning sheets, my documented videos during the process, and the director's cut commentaries by the kids post-production. If we don't decide to submit to this conference, we certainly should think about using this for another conference...perhaps NCTE or NRC for next year. Although the products weren't all "great," the process of "writing" and the process of "learning" that occurred (collective intelligence and cooperation in groups) was absolutely phenomenal...these are all principles Gee discussed in his book on literacy and learning in video games. Although we don't always see the significance through "teacher" gazes, if we return to innocence and look at these processes through a new lens, phenomenologically speaking "that which shows itself to someone"...we are actually seeing the transfer of Gee's principles into more traditional literacies. Our kids are structured through a traditional writing process but are still engaging in the high-level learning that occurs during these cooperative processes. Bereiter discusses these learning processes in an article about hypertext, Emergent Versus Presentational Hypertext. He argues that the most important learning comes through the process of creating digital works. The product is merely a by-product of the work. The real literacy and learning occur through the process. We are looking through "teacher" lenses and viewing the assessment and evaluation of the product. We are still caught in the traditional school web. I think if we analyzed the complex decisions, cooperation, collective intelligence, and text-to-text, text-to-self, and text-to-world connections these kids actually made without considering the products, we'd be amazed.
Wednesday, September 19, 2007
Into their worlds
Debbie wrote on 7/28:
Something special happened during the stop motion animation that did not manifest during the other three weeks. I felt that the kids really invited us into their worlds. I'm pretty sure this happened due to the high "play" nature of the creation of stop motion animation. During the creation process, the kids were constantly engaged in creative play. As they joked with us and we joked with them, they let us more and more into their secret worlds. And at times our worlds collided...especially in reference to TV, movies, and the Internet. One example was when we were choosing movies to show from the reality series On The Lot. Some of the kids, familiar with the site that accompanies the show (and weekly watchers), provided their own suggestions for "group" viewing (that one is okay...that one is questionable...no way). They had seen the show, and as we scrolled the titles on the website, they "helped" us censor these. Both of us, in our own spaces, had viewed and enjoyed these. However, in this third space, we both understood that only certain things were allowed in the "adults as caregivers of children" world. The children understand that adults sometimes behave badly (after all, who created all the unacceptable stuff in the first place?). They also enjoy these things in their worlds.
For example, a small group of boys was discussing Epic Movie. Ahem, I had just watched this video because I was interested in the heavy pop culture references and wanted to see the interplay of stories to compare with what the children were doing. The boys spoke about their enjoyment of certain scenes. Then they quieted down at certain moments (one boy saying, hey, there is an adult with us). It was almost as if they had forgotten I was there. I had entered their space. Then we went back into the third space and discussed some of the humor. They said that they laughed at the "bathroom" humor and would love to do it in a movie. Without judging, I reminded them of the permanence of the film and asked them if they really wanted their parents to see something like that on premiere night. They automatically transformed back from creative-everything-goes-pop-culture-pros into kids who wanted to make their parents proud.
This happened again with a group who made a fart sound at the end of their video. They wanted to re-edit and make a whole series of fart sounds, creating a kind of fart symphony. This was one of James's groups, so I called him over (I didn't want to intervene), and through the same kind of "what would your parents think" discussion, the kids decided to eliminate the fart symphony. In fact, they actually chose another funny sound as a replacement. Interestingly, however, that new sound became symbolic of the "insider" joke that they shared with me. When I saw the final product and laughed at the squeaking sound and noticed how much the kids were laughing, they looked at me and one said, "you know." This had become a small-m meme of mythic quality. The squeak symbolized the initial fart sound, the play around creating the great fart symphony, and the tangled emotions they felt as they changed their sound. However, they were still happy with their sound...maybe more so...because it had become an inside mini-pop-culture symbol, a second-level semiotic symbol (Barthes, mythology). The squeak totally lost its squeak significance. It became a hidden symbol for their desires, conflicts, and civilized cultural resolutions. Their scripts transformed based on the creative "play" of the characters.
This is something I notice when writing a novel. Through the creative interplay of my imaginary characters (in my head), the story "rewrites itself." The kids actually got to this high level of real writing as they constructed their own animations. Although they also scripted and storyboarded their projects (and realized the necessity of the process, for the most part), they also noticed how the story sometimes "wrote itself" (something successful published fiction writers say). It was as if they had "animated" the writing process...it came alive for them...as it does for accomplished writers. They became auteurs rather than amateurs.
This whole phenomenon also manifested during the Chuck Norris scene. Apparently there are multitudes of joke sites about Chuck Norris, and this is a current Internet meme. The kids knew about them and so did the counselor, so they were joking back and forth. The kids would whisper jokes and they tried to work out something that was appropriate. Clair text messaged her husband (they both liked the sites) and he sent a possible joke. The kids reacted instantly, and she was "in" their culture for the rest of the time. However, in addition to the play they constructed together, the kids were constantly monitoring each other...not in a don't-let-the-teacher-know kind of way...but more in a kind of respect for her position, knowing that there were some lines that could be crossed within the third space and there were some things that had to remain in the kid culture. Interestingly, they know that their talk isn't much different than adult talk. Although we never engaged in this type of behavior at the camp, they experience it elsewhere. They hear adults discussing "stuff" all the time, thinking they don't hear or don't understand. Adults do the stuff they can't even joke about in their presence.
Anyway, back to Chuck Norris. He was created out of clay by one of the students. It became almost a Frankenstein type of scenario with a touch of Star Wars (he uttered "I am your father" more times than the other boys in his group wanted to hear). After he created this character, he also created rules about how this character should be treated and respected (almost fatherly). This whole scenario was amazing.
We interviewed all groups yesterday. Tara (brilliantly) had the idea to interview the students as the "director's cut." In the past few weeks, I had limited success with my "researcher" interviews (the adult in her world interviewing the kids who lived in another world). However, when the kids were in role (process drama) as the directors, actors, and creators, they jumped out of their world into the third space created through the "play" interview. Amazingly, they discussed complex plots, character development, intertextual references, transmediated ideas, and underlying philosophical themes. However, also amazing, for a different reason, sometimes these failed to manifest in their actual products. Although the process was symbolic, deeply literary, and mythic (beyond something many of their parents would even understand), their products failed to show the extent of their wisdom. "Oh, look at those cute little animations...how they must have had fun 'playing.'" However, these kids, immersed in literary dialogue through "director's cuts" and the interactive websites that accompany all movies and books, are learning the inside, high-level literary elements, artistic design qualities, and screen literacies that most adults do not recognize. In fact, I could talk plot, character, theme, and storyline with these kids as if we were sitting in an adult writing group...an inner circle of highly literate individuals. Do the kids really not understand these concepts in schools, or is the way in which they are introduced to these concepts so antiquated that they don't even care to make the connections? Are they, in their own worlds, really shaking their heads at the ignorance of most adults? Is that show, "Are You Smarter Than a 5th Grader?", really true? Are most fifth graders really smarter than adults? Is school teaching rote knowledge over the "real thinking" that kids engage in with out-of-school literacies and play? Can these kids actually think better and more creatively than their parents?
Okay, enough of that stream of consciousness for a while. I'll get right to the proposal and send it back this afternoon.
deb
Film Semiotics
Debbie wrote on 7/29:
I spent some time reading Metz's Semiotics of the Cinema last night because I was trying to make a connection with the proposition idea and how it would apply to film. Well, this is what Metz says, and I think it fits nicely into an adapted version of a Turner and Greene-like propositional base. I'm not quite sure because I don't know the Turner and Greene as well as I should. I plan to go back to Spivey this afternoon and flesh that out for myself. Anyway, this is how Metz defines units of cinema. He differentiates film and language like this: "to 'speak' a language is to use it, but to 'speak' cinematographic language is to a certain extent to invent it." He also says the smallest unit that can be analyzed is the "shot." He compares the shot to the taxeme (Hjelmslev) in that it constitutes the largest 'minimum segment' (Martinet), since "at least one shot is required to make a film, or part of a film--in the same way, a linguistic statement must be made up of at least one phoneme. To isolate several shots from a sequence is still, perhaps, to analyze the sequence; to remove several frames from a shot is to destroy the shot. If the shot is not the smallest unit of filmic signification (for a single shot may convey several informational elements), it is at least the smallest unit of the filmic chain." However, he also noted that "not every minimum filmic segment is a shot. Besides shots, there are other minimum segments, 'optical devices'--various dissolves, wipes, and so on--that can be defined as visual but not photographic elements. Whereas images have the objects of reality as referents, optical procedures, which do not represent anything, have images as referents (those contiguous in the syntagma). The relationship of these procedures to the actual shooting of the film is somewhat like that of morphemes to lexemes; depending on the context, they have two main functions: as 'trick' devices (in this instance, they are sorts of semiological exponents influencing contiguous images), or as 'punctuation.' The expression 'filmic punctuation,' which usage has ratified, must not make us forget that optical procedures separate large, complex statements and thus correspond to the articulations of the literary narrative (with its pages and paragraphs, for example), whereas actual punctuation--that is to say, typographical punctuation--separates sentences (period, exclamation mark, question mark, semicolon) and clauses (comma, semicolon, dash), possibly even 'verbal bases' with or without characteristics (apostrophe, or dash, between two 'words,' and so on)."
Therefore...in my own words, I think we need to make an analogy between the proposition and the largest "minimum segment" of meaning. The shot as taxeme is one. However, we should not separate frames from the shot (like Leander did in the RRQ article) because this destroys the shot. However, there is also the minimum filmic segment of the "optical devices," such as transitions. So, using this framework, we analyze shots and transitions. Interestingly, as James and I worked slowly through one movie (00Q), we naturally seemed to separate the movie into shots. So I think this really works. Should I add this to the analysis in the AERA proposal?
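To make that unit of analysis concrete, here is a minimal sketch (in Python) of how a movie could be logged as a chain of Metz-style minimum segments, shots and optical devices, with each segment carrying the propositions we code for it. The field names and the sample 00Q segments are invented for illustration only; this is a thinking-aloud sketch, not our actual coding instrument.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    """One Metz-style 'minimum segment' of the filmic chain."""
    kind: str         # "shot" or "optical" (dissolve, wipe, and so on)
    start: float      # start time in seconds
    end: float        # end time in seconds
    description: str  # what the segment shows or does
    propositions: List[str] = field(default_factory=list)  # coded meanings

# Hypothetical coding of the opening of "00Q: The Cloning Catastrophe"
film = [
    Segment("shot", 0.0, 6.5, "00Q walks toward the camera in slow motion",
            ["00Q is the hero", "reference to Bond-style title walk"]),
    Segment("optical", 6.5, 7.0, "dissolve to the theater exterior",
            ["shift in time and place"]),
    Segment("shot", 7.0, 12.0, "00Q answers his cell phone, voice-over for the caller",
            ["00Q receives the mission"]),
]

# Shots stay whole: we count segments, never individual frames.
shots = [s for s in film if s.kind == "shot"]
opticals = [s for s in film if s.kind == "optical"]
total_props = sum(len(s.propositions) for s in film)
print(len(shots), "shots,", len(opticals), "optical devices,", total_props, "propositions")

The point of the structure is simply that the shot stays whole; we would code whole segments and the transitions between them, never individual frames.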
Saturday, July 28, 2007
List of Week 4 Animated Films
Morning Groups
Al the Pirate
Cheese Quest
The Rise of the Dancing Clay
Heroes
Lucky Penny 777
Optimus Prime 2: the Great War
The Creature Attacks
The Lost Episode
The Rise of the Freaky Phantom
Afternoon Groups
Barbie's Next Top Model
Bombing Raid
Clay vs. Lego
Globzilla
A Gift from Above
Jason vs. Jordan
The Beastly Snowman
Titanic 2: Jaws Arrives
List of Week 3 Films
Morning Groups
Group 1: Jailbreak
Group 2: Horror at the Tampa Theatre
Group 3: Phantom 101
Group 4: Hanna Montana Comes to the Tampa Theatre
Group 5: Jedi Academy
Group 6: Janitor Rock
Afternoon Groups
Group 1: Triskadekaphobia
Group 2: Lights Out
Group 3: Oddity
Group 4: What Men Want
Group 5: The Secret Life of Athletes
Group 6: The Last Pretzel
List of Week 2 Films
Morning Films
Group 1: Mario and Luigi Raiding Bowser's Castle
Group 2: Mr. Tamper's Ghost
Group 3: Girls, Ghosts, and Great Adventures
Group 4: Star Wars VIII
Group 5: The Ghost Worth Following
Afternoon Films
Group 1: Jadana
Group 2: Crazy Celebrities
Group 3: 24: The Terrorist Clowns
Group 4: Capture the Flag
Group 5: The Right Stuff
Group 6: Punked: Tampa Theatre Edition
Politics and Other Naughty Words
From: demikoz@aol.com
Subject: pandora's box
Date: July 28, 2007 12:03:18 PM EDT
To: king@tempest.coedu.usf.edu, jlwelsh2@gmail.com
Here's another idea that's been filling my head since last night. I'll call it a parody of "Defecation Hits the Rotating Oscillator" and the "opening of Pandora's box." In speaking to a variety of groups (counselors and students alike) during the afternoon session yesterday, I noticed an interesting loosening of the conservative teacher reins/reigns/rains (what a great word!) that usually stop the third space from leaking into their private worlds. After talking with a couple of the counselors last night (briefly), they mentioned the one claymation video shown during the afternoon class, where the word "crap" was used. Apparently, the kids in some groups (at least two) really wanted to use this word, and I heard it at least five or six times as I circulated post-movie. In fact, one group used the word "crap" IN their movie. Another group used the word "fart" in their movie. Would they have ever dared to use these words in the land where teachers live in their castles (schools)? By hearing the word "crap" in a demo video, did something (eeee-ew) begin to leak out of Pandora's box? Hmm.
While discussing this with three counselors, the discussion went something like this. The one in charge said the kids really wanted to use it. She respected their wishes and really didn't see that much of a problem. One looked uncomfortable until we discussed the origin of the word and the fact that Mr. (can't remember his first name) Crapper invented the toilet. Then she laughed and said it was kind of like a tribute word. We all laughed...maybe nervously...did we jump out of the middle ground? What did this mean? That group also created a monster called "Mr. Fricken Awesome"...fricken spelled like chicken (fricken chicken, already modified through pop culture) rather than fricking.
Then there is the Political Animation group. One of the kids (particularly brilliant and film savvy) originally brought in multiple pictures of Kenny (South Park) as an idea for a movie (multiple deaths of Kenny). He also brought two other pictures...an image of George W. Bush and a monkey. He said to me, don't they look the same? Then he looked at his paper where he had written P.A. to stand for paper animation, wrote the P.A. beside his Bush idea, and said "p.a. paper animation or political animation" and laughed. When speaking with the kids afterwards in the interview, they alluded to a reference in the credits about the name for Bush. They also said they hoped they wouldn't be sued and that Bush wasn't offended and maybe he really liked tacos (they had decided he could say 'I like tacos' on the screen). They also said they were both Democrats and "everyone" loved to make fun of Bush...that is why they used his image in the movie. When speaking to the counselor later, she said they actually wanted Bush to appear and say stupid things and actually say he was stupid. At the end of the movie, they gave credit to "Dubya" instead of naming Bush. The inside scoop from the counselor is that they actually wanted to give credit to "Dumya" and she had to censor their work. You could tell she had an extreme dilemma on her hands...on one hand she wanted to allow their creative genius...this was film camp, not school. On the other hand, what was appropriate when the worlds collide? In convergence culture, when the kids have so much knowledge, what is happening?
So then we spoke about the political nature of the talking head showing up on the scene. Without going into the "text-to-world" connection of Bush being a talking head while all the killing goes on around him, these kids seemed to really be world intelligent. I mentioned their talk about being Democrats, and two of the other counselors said that must have come from their parents. But then again, I wonder if it is just parents? I remember being in third grade and having political difficulties with my best friend. We eventually parted ways and I became friends with another girl whose political ideology was more in line with the one I was developing based on my father's strong influence (he has always talked politics and war with me). I was eight at the time and can distinctly remember feeling saddened by this parting. I also remember why I couldn't agree with her. I took what my father taught me and applied it to the real world. Obviously these kids (at 10) can have these strong political beliefs and feelings. Except now, with the Internet, they can actually become more literate about the situations (much more than I could in the 70's). The god-like quality of government during the time of war has diminished to demi-god. The public is becoming more educated world-wide. They are beginning to realize that most "truth" fed to them in the past as news or history is really faction...a semi-fictional account of fact told from a specific ideology. A fictional world does not need to be created for them. When people tire of the real world, they can easily escape into a "second-life" kind of world. Fact does not need to be disguised so much in a world where people are increasingly more intelligent and able to use their "inner fairy godmother" to resolve conflicts rather than waiting for the "wizard of oz" to tell them what they can or cannot do.
So what's a counselor to do? What are the leaders of the camp to do? Does this make its way onto the website? Did we open Pandora's box? If so, how far open did it actually get? Will the sh*t hit the fan? How do we respect the voices of these intelligent and creative children and still keep it "cute"...without really letting the world know how insightful and intelligent they really are?
Thursday, July 19, 2007
Prelim notes on Week 1 Afternoon Films
Group 1: Poultrygeist 2: Refried
Sequel to last summer's movie about a ghost chicken stalking people at the Tampa Theatre.
Slow mo entrance, low angle shot. Will's line delivery and removing glasses, referencing a movie? Maybe Dirty Harry or Bond? Parallel introductions for humor. Switching between two two-shots instead of one wide shot. Comic tone throughout. Chicken sound effect and shadow used to suggest monster. Tourist character frequently addresses the camera with a running commentary about the film. They are all constantly aware of the audience and their humor is self-referential. Extreme close-up on first victim's face, followed by dropping glasses in slow motion, accompanied by dramatic music. Ends with "Coming Summer 2008 - Poultrygeist 3: Burnt to a Crisp".
Group 2: Axe Effect
Parody of the Axe Effect commercials in which a boy uses Axe body spray and then is pursued by girls. The manager of the theater hires a girl to go undercover to observe the girls who were supposed to be working. The boy is portrayed by a girl. The undercover girl is disguised as a nerd. Star effect and slow tilt up and down of girl's body to show the contrast between nerd and non-nerd. It seems like the story may have shifted during the creation of the film.
Group 3: Harrietta Sculptor
The story of a girl wizard based on Harry Potter. Lightning scar on cheek. Tries to walk through wall. Used theater seats to simulate train. Radial wipe to signify passage of time? Audio problems throughout. Includes Harry Potter elements, like different classes, talking paintings, flying brooms, wands, etc. Chipmunk scene in darkness. Shadow and voice over suggests chipmunk. Outtakes over credits.
Group 4: The Making of 00Q: The Cloning Catastrophe
Behind the scenes documentary for another afternoon group project. Text on screen to show interviewer questions, coupled with music cue. They ask each actor about his or her character. Director considered making a spoof of Ghostbusters (a film from the 1980s), then considered a spoof of Indiana Jones (film series from the 80s and 90s), then thought of "Casino Royale" (a film from this year), which led him to this James Bond spoof. Interesting that most of the movies he considered spoofing were made before he was born. Videotaped play under credits.
Group 5: Monday Morning Coffee
Collection of TV parodies. The skits begin with a commercial for Hatchet Body Spray, a parody of Axe. Standup comedian segment. Man-on-the-street interviews beginning with two who don't answer. Third person responds "Paris" as a country. M. corrects her in his response.
Group 6: 00Q: The Cloning Catastrophe
Parody of a "James Bond" movie. The title is pronounced "double-oh cue". Slow motion walking. Voice over for cell phone caller. The way 00Q uses his cell phone communicates his character. Remote control miniature car used for 00Q's car, with revving sound effect. Exterior scenes in this movie are shot outside, which is more realistic and not typical. 00Q is wearing a lined leather jacket for the character, but it was very hot outside that day. On screen text to point out joke about 00Q burning his hand on metal from hot laser beam. 00Q is breaking into a theater in the flim reality, while the worker in the box office is clearly visible. A different camera angle would have hidden her.
James
Wednesday, July 18, 2007
Prelim Notes on Week 1 Morning Films
Group 1: Tampa Theatre's Phantom
Two girls and their grandfather are attacked by a phantom in the Tampa Theatre.
Opens with black and white footage and music for segment that took place in the past. Extreme closeup. Suspense music. Text on screen to indicate change in time period.
Group 2: It!
A boy finds a magic stone that lets him reverse time.
Sound effect and reverse effect used in combination. Twinkle effect used on magic stone. Text on screen to help explain plot. Outtakes over credits
Group 3: The Ghost of Anna Maria
Girls meet a friendly ghost and solve a mystery. Gender analysis? Aged film effect to show past - inconsistent. Text on screen to indicate time passage. "Will they ever learn?" ending. Outtakes over credits.
Group 4: Napoleon Dynamite 2: Return to the Past
Secret agent Napoleon Dynamite is sent to the past to recover the Hope diamond.
Shirts, dialogue, and body movements mimic pop character. Aged film effect to show action in the past. Mixing up identical suitcases device. Text on screen to indicate time period.
Group 5: The Phantom of the Tampa Theatre
One friend dares another to stay overnight in the Tampa Theatre. He does and he gets killed by the Phantom. Dutch angles. Point of view shots. Closeup adds tension. Slow motion, why? Ghost trails effect for chase scene. Thunder and fog effects in conjunction. "They'll never find me here." with phantom behind him. CU of watch. Picture of group over credits.
List of Week 1 Films
AM Groups
Group 1: Tampa Theatre's Phantom
Group 2: It!
Group 3: The Ghost of Anna Maria
Group 4: Napoleon Dynamite 2: Return to the Past
Group 5: The Phantom of the Tampa Theatre
PM Groups
Group 1: Poultrygeist 2: Refried
Group 2: Axe Effect
Group 3: Harrietta Sculptor
Group 4: The Making of "00Q: The Cloning Catastrophe"
Group 5: Monday Morning Coffee
Group 6: 00Q: The Cloning Catastrophe
Sunday, July 1, 2007
Memes and Rhizomes
On Friday I spent the afternoon surfing the web for articles on memes. Dawkins originated the term in his seminal 1976 text The Selfish Gene. Since then, a variety of researchers, theorists, and pop-culturalists have latched onto the term. Some theorists have tried (in my opinion) too hard to define memes as entities unto themselves. By sticking hard and fast to the analogy with genes, memes risk becoming trapped inside a metaphor: this is a meme, it is just like this, it will always be like this unless something from the outside changes it. In these articles, memes take on a structure like evolution. I think it's more like chaos theory. I think memes function more like rhizomes (Deleuze and Guattari). Although external forces can work on them to transform them, sometimes they just take on a life of their own...So how should we analyze our memes?
Hayles, N. Katherine
Desiring Agency: Limiting Metaphors and Enabling Constraints in Dawkins and Deleuze/Guattari
SubStance - Issue 94/95 (Volume 30, Number 1&2), 2001, pp. 144-159
University of Wisconsin Press
SubStance 30.1&2 (2001) 144-159. Recent work in the cultural studies of science has shown the importance of metaphoric networks for scientific inquiry. Sometimes these networks have functioned to lead scientists in the wrong direction. For example, metaphoric equations developed in nineteenth-century physiology mapped Africans, women, and animals onto one another to the detriment of all three categories, as Nancy Leys Stepan has shown. But more often, metaphors have opened up fruitful lines of inquiry, as when Norbert Wiener saw metaphoric correspondences between prosthetic devices and cybernetic machines ("Sound Communication"). It is not easy to determine where the limits of metaphor should be drawn. In some sense almost all language can be considered metaphoric, as Michael Arbib and Mary Hesse argue in discussing metaphoric resonance in measurement. Indeed, even mathematics can be considered metaphorical, as Norbert Wiener pointed out when he observed that mathematics was "the most colossal metaphor imaginable" (Human Use, 95). So can sense perception, as Walter Freeman and Gregory Bateson among others have argued, for perceptual experiences are metaphors for reality rather than representations of reality. In Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, George Lakoff and Mark Johnson give this idea a linguistic turn when they argue that metaphor connects abstract thought with embodied experience, providing a grounding we often fail to see precisely because it is so pervasive and fundamental. These diverse...
Wednesday, June 13, 2007
Two more chapter summaries from Handbook of Visual Analysis
Collier's is a more qualitative approach; the next one is a more quantitative, reductionist approach.
Collier, M. (2001). Approaches to analysis in visual anthropology. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
“analysis of visual records of human experience is a search for pattern and meaning, complicated and enriched by our inescapable role as participants in that experience.” P. 35.
The importance of all of the elements of an image...the visual field contains a complex range of phenomena. Responsibly address many aspects of images, "recognizing that the search for meaning and significance does not end in singular 'facts' or 'truths' but rather produces one or more viewpoints on human circumstances, and that while 'reality' may be elusive, 'error' is readily achieved." p. 36
Analysis and the importance of contextual information. Making good research collections...good documentary photos are different from "good quality" photography. A good documentary is often presented as a single image divorced from the larger context (my note: with digital photography, you can do both...take the wider photo and zoom in on the particulars).
A good research collection: carefully made with careful and comprehensive temporal, spatial, and other contextual recording, good annotation, collection of associated information and maintenance of this information in an organized data file.
DIRECT ANALYSIS:
“Any major analysis should begin and end with open-ended processes, with more structured investigation taking place during the mid-section of this circular journey” p. 39
The model, adapted from Collier and Collier (1986), outlines a structure for working with images.
1. first stage: observe the data as a whole. Look at and listen to overtones and subtleties to discover connecting and contrasting patterns. Trust feelings and impressions. Take notes and identify the images they are a response to. Write down all the questions the images trigger in your mind...these may be good for future analysis. See and respond to the photos as a statement of cultural drama. Let these characterizations form a structure within which to place the remainder of your research.
2. second stage: make inventory or log of all your images. Design inventory around categories that reflect and assist research goals.
3. third stage: structured analysis. Quantitative: go through the evidence with specific questions...measure, distance, count, compare. The statistical information can be plotted on graphs, listed in tables, or entered into a computer for statistical analysis. Qualitative: produce detailed descriptions.
4. fourth stage: search for meaning significance by regturning to the complete visual record. Respond again to the data in an open manner. Re-establish context, lay out photos, view images in entirety, andthen write your conclusions as influenced by final exposure to the whole.
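As a practical aside on stage two: Collier's inventory amounts to keeping an organized data file for the image collection. Here is a minimal Python sketch of one way to keep such an inventory as a CSV file. The field names (image_id, location, contextual_notes, and so on) and the example entry are our own illustrative assumptions, not Collier's categories, and would need to be redesigned around the goals of a particular study.

import csv, os

# Illustrative inventory fields; redesign these around the research goals.
FIELDS = ["image_id", "file", "date", "location", "participants",
          "activity", "contextual_notes", "questions_raised"]

def append_entry(path, entry):
    # Append one image record to the inventory, writing a header row if the file is new/empty.
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

# Hypothetical example record (invented file name and details, for illustration only).
append_entry("inventory.csv", {
    "image_id": "IMG001",
    "file": "week3/stopmotion_group2_frame12.jpg",
    "date": "2007-07-18",
    "location": "classroom",
    "participants": "group 2 (4 students)",
    "activity": "stop motion shooting",
    "contextual_notes": "students adjusting clay figures between frames",
    "questions_raised": "who decides camera position?",
})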
Jewitt, C., & Oyama, R. (2001). Visual meaning: A social semiotic approach. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
The term 'resource' is one of the key differences between social semiotic and Paris school structuralist semiotics.
Article 14
Bell, P. (2001). Content analysis of visual images. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
This chapter deals with explicit quantifiable analysis of visual content as a research method. Content analysis is one of the most widely cited kinds of evidence in Media studies.
Begin: Content analysis begins with some precise hypothesis or question about well-defined variables. (my note: These variables should include a well defined description of the media and the modes.)
Hypotheses: the hypotheses which content analyses usually evaluate are comparative. Researchers are usually interested in whether, say, women and men are depicted more or less frequently. "Content analysis is used to test explicitly comparative hypotheses by means of quantification of categories of manifest content." p. 13
"Visual content analysis is a systematic, observational method used for testing hypotheses about the ways in which the media represents people, events, situations, and so on. It allows quantification of samples of observable content classified into distinct categories. It does not analyse individual images or individual 'visual texts' (compared with psychoanalytical analysis (ch. 6) and semiotic methods (ch. 4, 7, 9)). Instead, it allows description of fields of visual representation by describing the constituents of one or more defined areas of representation, periods or types of images."
Typical research questions:
1. Questions of priority/salience of media content: how visible (how frequently, how large, in what order in a programme) are different kinds of images, stories, and events?
2. Questions of 'bias': comparative questions about the duration, frequency, priority or salience of representations of, say, political personalities, issues, policies, or of 'positive' versus 'negative' features of representation.
3. Historical changes in modes of representation of, for example, gender, occupational, class, or ethnically codified images in particular types of publications or television genres.
What to analyse: ‘items’ and ‘texts’
The content can be visual, verbal, graphic, oral…a visual display as text, an advertisement as text, a news item as text…because "it has a clear frame or boundary within which the various elements of sound and image 'cohere', 'make sense' or are cohesive" (p. 15). Texts are defined within the context of a particular research question, within the theoretical categories of the medium (television, internet), and within the genres (book, portraits, news, soap operas) on which the research focuses.
Visual content analysis isolates framed images or sequences of representation. Unlike semiotic analysis, content analysis classifies all the texts on specified dimensions. It is not concerned with 'reading' or interpreting each text individually. Semiotic analysis is qualitative and focuses on each text or genre in the way a critic focuses on meaning.
Analysis
• Variables: a content variable is any such dimension (size, colour, range, position on a page), any range of options that can be substituted (e.g. male/female), or a number of alternative settings (kitchen, bathroom, bedroom, etc.). Variables include size, represented participants, settings, priority, duration, and depicted role. In content analysis, a variable refers to aspects of how something is represented, not to 'reality'.
• Values: the values are categories and should be mutually exclusive and exhaustive. Use a coding scheme and look for themes. Visual content analysis example:
Variables and example values:
• Gender: male, female
• Role: house duties, nurse, executive, teacher
• Setting: school, group, inside, outside
• Size: full situation, partial group
• Alternatively, you could rank content emphasis by duration (for example, in the video newscasts, you could rank the amount of time spent in a variety of roles, in types of newscast situations, using props, etc.).
Quantitative results: comparisons and cross-tabulations
Compare by gender or by visual modality, which relates to the 'truth value' or credibility of statements about the world (Kress and van Leeuwen, 1996). Visual images also 'represent people, places, and things as though they are real…or as though they are imaginings, fantasies, caricatures, etc.' (Kress and van Leeuwen, 1996, p. 161). The book gave an example of a table cross-tabulating defined values of modality by gender. The modalities chosen were standard, factual, and fantasy. (In the newscasts, we could code the types of character, such as newscaster, interviewee, movie star, sports star, etc., and cross them by gender.)
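To make the cross-tabulation idea concrete, here is a small Python sketch (assuming pandas is available) using made-up codes. The role and modality values are our own working categories for the newscast idea above, not Bell's, and the six rows are invented purely for illustration.

import pandas as pd

# Hypothetical codes for a handful of shots from the children's newscasts;
# each row stands for one coded shot.
codes = pd.DataFrame({
    "gender":   ["female", "male", "female", "male", "female", "male"],
    "role":     ["newscaster", "interviewee", "sports star",
                 "newscaster", "interviewee", "movie star"],
    "modality": ["factual", "factual", "fantasy",
                 "factual", "standard", "fantasy"],
})

# Cross-tabulate role by gender, in the spirit of Bell's modality-by-gender table.
print(pd.crosstab(codes["role"], codes["gender"], margins=True))

# A second table could cross modality by gender in the same way.
print(pd.crosstab(codes["modality"], codes["gender"]))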
Reliability:
"degree of consistency shown by one or more coders in classifying content according to defined values on specific variables." p. 21. Inter-coder reliability (two coders) or intra-coder reliability (one coder, different occasions).
• Measuring reliability: define variables clearly and precisely and ensure that all coders understand these definitions in the same way.
• Train coders in applying defined criteria for each value and variable.
• Measure the inter-coder consistency with which two or more coders apply the criteria.
If only one coder is to be employed, a pilot study should be conducted to measure intra-coder reliability. Have the coder classify 50-100 examples on all relevant variables. Correlate the two sets of classifications. Use the following methods:
1. Per cent agreement: calculate how frequently the two coders agree on judgements. Agreement of 90 per cent is recommended when there are two coders. Less than ten per cent of items should fall into the "other" category. The fewer values there are on a given variable, the more likely there is to be agreement between coders based on chance.
2. Pi: a more sensitive measure of reliability. Pi = [(per cent observed agreement) - (per cent expected agreement)] / (1 - per cent expected agreement), where the expected agreement is the sum of the squares of the expected proportions for each value. See page 23. (A small computational sketch of both measures follows.)
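Since both measures are just arithmetic on the coders' classifications, here is a short Python sketch of percent agreement and Scott's pi, following the formula as summarized above (expected agreement taken as the sum of the squared pooled proportions). The example labels are invented, not data from our study.

from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Proportion of items on which the two coders assign the same value.
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def scotts_pi(coder_a, coder_b):
    # Pi = (observed - expected agreement) / (1 - expected agreement),
    # with expected agreement = sum of squared proportions of each value
    # pooled across both coders' classifications.
    observed = percent_agreement(coder_a, coder_b)
    pooled = Counter(coder_a) + Counter(coder_b)
    total = sum(pooled.values())
    expected = sum((n / total) ** 2 for n in pooled.values())
    return (observed - expected) / (1 - expected)

# Invented classifications of the same six items by two coders.
coder_a = ["factual", "factual", "fantasy", "standard", "factual", "fantasy"]
coder_b = ["factual", "fantasy", "fantasy", "standard", "factual", "factual"]
print(percent_agreement(coder_a, coder_b), scotts_pi(coder_a, coder_b))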
Limitations: the main limitation is "the relatively untheorized concepts of messages, texts or manifest content that it claims to analyze objectively and then quantify" (p. 24). The categories of visual content usually quantified arise from commonsense social categories. Such variables are not defined in any particular theoretical context. (However, what about visual analysis of websites or slide show presentations? If I use defined categories based on Kress and van Leeuwen, or on Alessi and Trollip, or on Callow, does this make my categories more valid?) Other limitations include:
• Marxist and neo-Marxist theory…Adorno has quipped that 'culture' cannot be defined as quantifiable.
• Other critics cite bias
• Culturally complex and hard to quantify
• Stuart Hall (1980): violent incidents in cinematic genres are only meaningful to audiences who know the genres' respective codes (story structure, thematic elements, plot, character: one must know the genre).
• Winston (1990) discussed 'inference' problems. Content analysis cannot be compared with an assumed reality. Is it true or false? Is there a bias? Is it a positive or negative representation?
• Generalizing from content analysis results can be difficult. Sometimes it is assumed that users understand or are affected by media in similar ways.
• Visual representations raise further theoretical problems of analysis. Many highly coded conventional genres of imagery have become media clichés. "To quantify such examples is to imply that the greater their frequency, the greater their importance." Yet the easy legibility of clichés makes them no more than short-hand stereotypical elements for most viewers, who may not understand them in the way that the codes devised by a researcher imply (p. 25). (However, in our news media study, we are looking for appropriation of media elements and iconic representations that children take from the real world and use to "play" with as "textual toys" (Dyson); how does this fit in? So our case is special. With children we are looking for these types of representations…but how?)
Validity: going beyond the data. "To conduct a content analysis is to try to describe salient aspects of how a group of texts represents some kinds of people, processes, events, and/or interrelationships between or amongst these. However, the explicit definition and quantification that content analysis involves are no guarantee, in themselves, that one can make valid inferences from the data yielded by such an empirical procedure. This is because each content analysis implicitly (or sometimes explicitly) breaks up the field of representations that it analyses into theoretically defined variables. In this way it is like any other kind of visual or textual analysis. Semiotics posits as semantically significant variables such as 'modality' or 'represented participants' or conceptual versus narrative image elements" (p. 25).
Ask: does the analysis yield statements that are meaningful to those who habitually 'read' or 'use' the images?
The criticism most often leveled against content analysis is that the variables/values are somehow only spuriously objective.
Validity refers to how well a system of analysis actually measures what it purports to measure. "Valid inferences from particular content analyses will reflect the degree of reliability in the coding procedures, the precision and clarity of definitions adopted and the adequacy of the theoretical concepts on which the coding criteria are based" (p. 26).
van Leeuwen: analyzing visual texts through iconography
This is a summary of one of the chapters from Handbook of Visual Analysis. I think we should further explore the notion of visual semiotics and iconography. Also, as noted in Rose (the chapter on semiotic analysis), I think we should further investigate Barthes' notion of "mythology". I think that Barthes' mythology is a good way to think of our memes. Citing Rose: myth is thus a form of ideology...but the myth is believable precisely because form does not entirely replace meaning...the interpretation of mythologies requires a broad understanding of a culture's dynamics. Therefore, like memes, in terms of information literacy, the more you know, the more you see; and the more you see, the more interesting the meaning you can make. One other really interesting notion from Barthes is that "myth is not defined by the object of its message, but by the way in which it utters that message: there are formal limits to myth, there are no substantial ones" (p. 117). Myth is a "second order semiological system" (p. 123), a double-order meaning system. Individuals who are visually and media literate will be able to interpret this second-order system. Myth builds on first-order signs, with a signifier and signified. However, the denotative sign becomes a signifier at the second, mythological or memetic, level of meaning, and at this second level the signifier is accompanied by its own signified: the signifier is the form, the signified a concept, and their relation is the signification. When image becomes form, the richness of the image is left behind and the gap is filled with signification. Myth makes us forget that things were and are made, and naturalizes the way things are (Rose, p. 91). Therefore, when we insert memes into movies, we are constructing virtual realities beyond the first-level meaning of the simple form. Additionally, using Rosenblatt's theory of transaction between reader and text, these meanings are derived through personal experience and the interaction between reader and text. Also, the meanings change based on the school-based literacy and other literacies of individuals. For example, people well versed in pop culture will find more meaning in certain types of media.
Here is the chapter summary...
Van Leeuwen, T. (2001). Semiotics and iconography. In T. van Leeuwen & C. Jewitt (Eds.), Handbook of Visual Analysis (pp. 92-118). London: Sage.
In this book chapter, van Leeuwen discusses two approaches to visual analysis: the visual semiotics of Roland Barthes (1973, 1977) and iconography. He began by discussing how the two approaches search for the meaning of representation and the question of the hidden meanings of images. However, while Barthes' approach studies the image itself, treating cultural meanings as a given currency shared by all in a particular culture, iconography also attends to the context and to how and why cultural meanings came about historically.
In Barthes’ semiotics, the key is the layering of meaning; the first meaning is the denotation (who or what is depicted) and the second layer is the connotation (what ideas and values are expressed through what is represented and how it is represented). For Barthes, denotation is relatively simple. Perceiving photographs is close to perceiving reality because they provide a point by point reference in terms of denotation. The first layer of interpretation is to simply recognize what we already know. Although denotation is partially “up to the eye of the beholder” it also depends on the context. These pointers relate to problems inherent in a Barthian description of visual denotation, factors which can change the meaning: categorization (including the use of captions); groups vs individuals (can have a similar effect); distancing (zooming); and surrounding text (or pictures).
The second layer of meaning, according to Barthes, is connotation: the layer of ideas and values, what things 'stand for' or 'are signs of'. According to Barthes, this idea is already established as part of a cultural norm. For example, specific photographic techniques (zoom, shutter speed, effects) have been defined by Barthes as 'myths' in that they are first very broad concepts but they link together everything associated with a single entity. These are also ideological meanings, serving the status quo or the interests of those in power. Barthes further described the unwritten 'dictionary' of poses that can color meanings; he described the posing of objects, where meaning comes from the object photographed, as a 'lexicon'. However, the specific parts of images are not simply a series of discontinuous 'dictionary entries'; Barthes also reads them together as a 'discursive reading of object-signs' (1977, p. 24). Therefore, there is a 'syntax' because the 'signifier of connotation is no longer to be found at the level of any one of the fragments of the sequence but at that…of the concatenation' (1977, p. 24).
Connotation can also come through the style of artwork or photogenia, the techniques of photography such as ‘framing, distance, lighting, focus, speed’ (1977, p. 44). Some analytical categories, such as social distance, point of view, and modality fall under this category. Van Leeuwen provides an example of a visual qualitative and quantitative analysis on page 98-99.
Iconography, the second form of analysis, utilizes three layers of image meaning: representational meaning, iconographical symbolism, and iconological symbolism. 'Representational meaning' is close to 'denotation' in that it is the recognition of what is represented on the basis of our past experience and prior knowledge (Panofsky, 1970). In 'iconographical symbolism', the 'object-signs' not only denote a particular object but also the ideas or concepts attached to it; Panofsky called it 'secondary or conventional subject matter' (1970). Conventions of the past are more recognizable than developing conventions. 'Iconological symbolism' is what could be called ideological meaning, or, as Panofsky explained, "to ascertain those underlying principles which reveal the basic attitude of a nation, a period, a class, a religion, or a philosophical persuasion" (1970, p. 55).
Van Leeuwen also discriminated between Barthian visual semiotics and iconography in that iconography uses both textual analysis and contextual research. Representational meaning is determined by the following: the title indicates who or what is represented. The identification of the represented can be done on the basis of personal experience, on the basis of background research, through reference to other pictures, or on the basis of verbal descriptions.
In terms of symbolism, van Leeuwen distinguished between abstract symbols (abstract shapes with meaning, like crosses) and figurative symbols (represented people, places or things with symbolic value). Figurative symbols are often seen as natural. Additionally, textual and contextual arguments are used that give 'pointers' telling viewers how to interpret an image. Hermeren (1969) discussed four kinds of pointers: a) the symbolic image is presented with more than normal care and detail, given a prominent position, or made more conspicuous through lighting, tone, color, etc.; b) someone in the picture points at the image or gestures; c) the motif seems out of place; or d) the motif contravenes the laws of nature. Moving from iconographical to iconological symbolism, we move from identifying conventional associated meanings to interpretation. These interpretations depend on 'something more than a familiarity with specific themes or concepts as transmitted through literary sources'. Instead, it requires 'a mental faculty comparable to that of the diagnostician—a faculty which I cannot describe better than by the rather discredited term "synthetic intuition"' (1970, p. 64).
Both methods of interpretation provide arguments for using representational elements such as: poses and objects and elements of style (angle, focus, lighting). Both systems recognize that symbolism may be open or disguised.
Here are summaries of the key points of the following books. First I summarized The Grammar of Visual Design by Kress and van Leeuwen, and then I summarized key concepts in their Multimodal Discourse book. One important concept that Kress alludes to in both books, as well as in Literacy in the New Media Age, is the concept of "reading path". I've been thinking about reading path in terms of how the students are reading films in the camp. It seems as if there are multiple paths. For example, students read temporally. They also read the images as spatial. They also look for changes as the images move. They also look for changes in expressions or uses of specific shots to depict meaning. Hmm. What does this mean?
Kress, G., & van Leeuwen, T. (2006). Reading Images: The Grammar of Visual Design: Second Edition. New York: Routledge.
In Reading Images, Kress and van Leeuwen provided a model for visual grammar. A grammar, they pointed out, is an inventory of observed regularities used as a means of representation, not just a delineation of rules and regulations of normative correctness. They noted that the grammars of verbal texts and visual images have developed side by side. These similarities, however, should not lead one to expect a specific grammar for visual images of the kind found for linguistic texts.
Kress and van Leeuwen further ground their theory in systemic functional linguistics. Here are some of the main features of the theory of visual grammar:
1) Narrative in visual representations: A vector is needed to make a proposition in visual media. A vector is a line or implied line that suggests direction. Elements of a composition are called ‘participants’, the participant from which a vector departs is an ‘actor’, and the arrival point is the ‘goal’. The meaning is a transaction; if this meaning is reversible it is called an interactive transaction. The geometrics of such relationships are sources of meaning. Lack of a clear ‘reading path’ can lead to ambiguity. A summary of realizations can be found on page 74-75.
2) Conceptual Representations:
• Classification: Symbolism or shape can be added to these diagrams. Vectors can also be evident in diagrams such as flow charts
• Analytic: Relation of a whole and parts that give ‘possessive attributes’ to the whole. Obvious examples are bar charts and circuit drawings, but portraits can also be structured in this way
3) Representation and interaction: The first direct gaze from the representation of a human out to a viewer is attributed to Van Eyck (1433). This is a power relationship, a powerful way of addressing the viewer—the direct gaze. Other directions of gaze are symbolic. If the subject is looking up, the subject is inferior. If the subject is looking down, she is superior. A level gaze denotes equality.
4) Modality: we prioritize an image by modality markers embedded within. In the west, high modality is signified by realism (truth). In other cultures, it may be more symbolic (religious). Markers of realism can be: detail, depth, quality of material, illumination, color, and craft design skill. Different areas of culture and different “subject” area discourses may have different coding orientations. For example, in areas of science, the modality code is the blueprint; whereas, in advertising, the modality code may require bright colors. In art, modality becomes a play of signifiers; complex and often esoteric relations between modality markers often provide an intertextual high modality. Modality is also conveyed by authenticity.
5) Composition: provides an integration through symbolic meanings of position, weight, and framing. Realizations include: centered, polarized, triptych, circular, margin, mediator, given, new, ideal, real, salience, disconnection, connection.
• Left and right denote the ‘given’ and the ‘new’ due to the broad convention in the West that relates to our custom of reading left to right. The eye tends to start at the left of the image and move right. (note: this is often different with pre-readers—salience plays a role in their visual literacy).
• Top and bottom denote ideal and real, promise and product, emotive and practical, head and foot.
• The center is the place of the ruler, harmony and symmetry. In western art and graphic design, the use of a geometrically centered image is considered naïve.
• Weight includes: size, focus, contrast, and foregrounding. The weightings of these aspects of image have a center of gravity.
• Framing: may be explicit or implied. Lack of framing implies a group identity; whereas, framing individuates.
• Rhythm: in film—time based image. In a book, flicking through a page. Rhythm in multimedia could also refer to zooms, pans, and transitions in a sequence of time.
• Salience is the degree to which an element draws attention to itself due to size, place, or overlapping of elements (color, tone, sharpness, definition, etc.).
• Connection/disconnection: the degree to which an element is connected or visually separated through framing, empty space, vectors, and differences/similarities in color and shape.
6) Materiality and Meaning: Inscription: brush strokes…also hand-made marks, marks recorded with technology, and marks synthesized in technology. Color is a semiotic mode which carries meanings of its own (including cross-cultural variations). Color also has emotions attached to it. Additionally, in certain arenas, color has textual connotations (blue text on a computer symbolizes a hyperlink). Color coordination can promote cohesion. Distinctive features of the semiotics of color include: value, saturation, purity, modulation, differentiation, and hue. Finally, color schemes can provide significant design qualities. (A rough sketch of extracting some of these color features from an image follows.)
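Because value, saturation, and hue map loosely onto an HSV decomposition of an image, one crude way to log color as a semiotic resource is simply to average those channels. Here is a minimal Python sketch, assuming the Pillow imaging library is installed; the example file name is hypothetical, and the numbers are only a rough proxy for the color features above, not a semiotic analysis in themselves.

from PIL import Image

def color_profile(path):
    # Mean hue, saturation, and value (each 0-255) for an image file.
    # Averaging hue is crude (hue is circular), so treat this only as a
    # rough descriptor for comparing images within a collection.
    pixels = list(Image.open(path).convert("RGB").convert("HSV").getdata())
    n = len(pixels)
    return {
        "hue": sum(p[0] for p in pixels) / n,
        "saturation": sum(p[1] for p in pixels) / n,
        "value": sum(p[2] for p in pixels) / n,
    }

# e.g. color_profile("week3/title_slide.png")  # hypothetical frame from the camp videos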
Article 4
Kress, G., & van Leeuwen, T. (2001). Multimodal Discourse: The Modes and Media of Contemporary Communication. New York: Oxford University Press.
In Multimodal Discourse, Kress and van Leeuwen outline a theory of communication for the age of interactive multimedia. Beginning with the concept of 'design', they outline an approach to social discourse where color and font play a role equal to language. They defined multimodality as the "use of several semiotic modes in the design of a semiotic product or event, together with the particular way in which these modes are combined—they may for instance reinforce each other (say the same thing in different ways), fulfill complementary roles…or be hierarchically ordered" (e.g. action films, where action is dominant and music adds to the presence). Furthermore, they articulated communication as a "process in which a semiotic product or event is both articulated or produced and interpreted or used" (p. 111).
In the final chapter, they delineate a multimodal theory of communication that concentrates on two things: a) the semiotic resources of communication (modes and media); and b) the communicative practices in which resources are used (discursive, interpretive, production, design, and/or distribution practices). The key point they made is that meaning is made "not only with a multiplicity of semiotic resources, in a multiplicity of modes and media but also at different 'places' within each of these" (p. 111).
One of the key elements in the novelty of multimedia discourse is the aspect of design. Discourses can be realized in different modes; each mode adds layers of meaning. Design consists of a 'blueprint', an overall spatial schema of a page with bits of information. This could also be used in connection with other modes (text, color, spatial arrangement, font, etc.). Therefore, on a multimodal "page", information is spatially, rather than sequentially, organized. Spatial order (where elements are placed, how salient they are, in which ways they are framed, how they are connected, color harmony/disharmony) becomes a key aspect of the visual schema. Unlike a traditional text, where the reader follows a sequential order, in a visual text the importance is suggested by hierarchies of salience.
Further elements of design are related to production and distribution. For example, the way in which separate bits of information are produced (with boxes on pages—such as a website) adds to the meaning. In this way, typography also becomes significant. The use of a handwriting font depicts a personal message, something that has become conventional. Using the premise of “Provenance” --“the idea that signs may be imported from one context into another in order to signify the ideas and values associated with that other context by those who do the importing” (p. 23)--the use of handwriting is a sign of personal address that has become conventional. However, it has not been grammaticalised; typography is still ‘lexical’ and works through connotation. Therefore, the meaning of the font is different than the meaning of the actual text, which follows grammatical rules.
Essential Articles
Here are my summaries of some essential articles for our research
deb
Callow, J. (2003, April). Talking about visual texts with students. Reading Online, 6(8), Available: http://www.readingonline.org/articles/art_index.asp?HREF=callow/index.html
Using the multiliteracies visual design concepts of Kress and van Leeuwen (1996), Callow investigated what metalanguage students used when talking about visual aspects of their multimedia texts.
Two Australian teachers, each working with 25 sixth-grade (11-year-old) students, participated in the study. The science and English curriculum were combined with the school's computer technology program to create the six-week unit of study. The context of the study consisted of students investigating food production and working together to create PowerPoint slide presentations which integrated text and image. Working in groups of four or five, the students were provided with several facts by the researchers. Students combined the facts; paraphrased; sequenced the information; and included text, images, sound, and animation as part of their multimedia presentations.
Researchers used a qualitative approach (Merriam, 1998). Sources of data included field notes, discussions with teachers, collection of work samples, and group interviews with students about their work. Discussions with students included their comments on the features of: image, color, selection, salience, and layout. In addition, the researchers also asked the students evaluative questions about the effectiveness of their use of the visual features in presentations. Student perceptions of what qualities made a good slide show—including features of color, selection, image, salience, and layout—were the main criteria for evaluating presentations.
When asked what makes an effective PowerPoint, the students noted, intuitively: color (15 students), animation (10 students), sounds (8 students), text features (7 students), backgrounds (6 students), and pictures (5 students). However, when asked why they chose a particular element, few students were able to express specific reasons for their choice. Interestingly, students decided that photographs and clipart would be effective in different circumstances. They noted that photographs were "more realistic" and denoted a serious tone, making them more effective for adults. Clipart, on the other hand, would be an effective visual for younger children or a less serious tone.
In terms of metalanguage, students discussed many features of design by comparing their work to books or other visually enhanced texts. Although the students were unable to discuss the elements in terms of a specific metalanguage, they were able to justify why they made particular choices.
The strength of this article is that it investigates an issue essential to students competing in a technological global economy: the creation of effective presentations. Although written texts remain important means of communication, final presentations in businesses increasingly include multimodal “texts.”
I also found weaknesses in this article. First of all, it would have been helpful to see more examples of student work or vignettes that detailed a presentation. In addition, the researcher characterized PowerPoint as linear rather than weblike. However, using PowerPoint to create a museum kiosk-like presentation, students can easily add hyperlinks with buttons within the show, between shows, and to online documents. Perhaps at the time of the study, the version of PowerPoint did not include these features. On the other hand, few people are familiar with the interactive features of PowerPoint, including action buttons and custom animation.
I found the verbal reports of effectiveness compelling—so compelling that I plan to use this article as a major element in my dissertation. Another strength was that the study was easy to follow and included detailed descriptions of the presentations. However, I would have liked to see more details about the actual process of creating the PowerPoint presentations.
One implication of this article is that when working with visual and multimodal texts, students need to understand not only technical skills but also how these elements create meaning. In particular, they must understand how color, salience, images, and layout design affect the effectiveness of a presentation. Educators, too, need to understand the use and meaning-creation potential of these features.
Integration of multiliteracies in new learning environments is a new and exciting concept, one I intend to study in detail over the next couple of years. With the advent of social networking sites and video sharing (YouTube), anyone can publish a multimedia message. No longer are elements of design strictly in the hands of professionals; amateurs can use simple design tools to create their own messages. Schools must keep in touch with the realities of literacy: what types of literacies are effective now, and what types will be effective in the future?
Article 2
Semali, L., & Fueyo, J. (2001, December/January). Transmediation as a metaphor for new literacies in multimedia classrooms. Reading Online, 5(5). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=semali2/index.html
This research used a case study approach to investigate transmediation through exemplars observed in classroom situations.
In the article, the authors first defined key terms:
• Multiple sign systems: art, movement, sculpture, dance, music, words, digital, and multimedia.
• Transmediation: responding to cultural texts in a range of multiple sign systems.
• New literacies: “the ability to read, analyze, interpret, evaluate, and produce communication in a variety of textual environments and multiple sign systems” (p. 1).
Then, following a well-developed literature review, the authors discussed their central concerns:
• “What is the relationship between what students know and the signs they encounter in their classrooms (about race, class, gender, disability, and sexual orientation)?
• What meaning do they make of these semiotic systems in their literacy practices?” (p. 3).
The authors provided some detailed cases that illustrated exemplars of transmediation activities. They then discussed the first scenario in terms of semiotics. However, when I turned from page five to page six, I thought a major part of the article was missing. The authors simply noted, “equally, the other scenarios aim to open our eyes to a variety of symbolisms, codes, and conventions…” They failed to analyze the other scenarios in terms of sign systems, transmediation, and new literacies. What began as a very exciting article fell short of satisfying my desire to learn more about how the real-life cases related to the background theory.
Despite the weaknesses in the analysis, I found the format clear and easy to follow. I plan to use a similar format to write up results for a qualitative study on multimedia creations.
Article 5
Muffoletto, R. (2001, March). An inquiry into the nature of Uncle Joe’s representation and meaning. Reading Online, 4(8). Available http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/muffoletto/index.html
In this article, Muffoletto addressed critical or reflective visual literacy. In terms of visual literacy, Muffoletto noted that a diversity of meanings has traditionally been devalued in classroom settings. Reflective visual literacy empowers students to understand the power of the image and to evaluate images based on their personal experiences. Comprehending the process of reflective visual literacy is only possible if teachers incorporate the notion of multiple perspectives into their daily teaching. Using photo essays, students should be allowed to express their own voices and describe their own perceptions of how the image reflected their experience. The ultimate power of reflective visual literacy is that it situates visual representations and their interpretation (construction of meaning) in a context that raises issues about benefit and power.
Muffoletto provided an extensive discussion of how individuals perceive image as text. Images—and our perceptions of them—are not natural. We see what our eyes and brains let us see. We experience the world through a reality that has been constructed for us through social and biological limitations. “Like texts, visual representations (visual texts) are the result of ideologically formed intentional acts…the visual text, as a representation that stands in place of an object or concept, requires a social codification—the construction of meaning through a system of codes used by the author and reconstructed by the reader.”
Muffoletto discussed the “fluid representational nature of icons, signs, and symbols” he found in photographs. At one moment the picture is an icon (this is a picture of…), at another a sign (usually I associate this with…), and at another a symbol (more complex associations). Meanings are assigned to the image by individuals who are members of historical social communities (Fish, 1980)—communities defined by gender, race, religious, cultural, and economic perspectives.
Muffoletto further grounded the concept of visual literacy within semiotics, the study of signs, which can be a useful tool for understanding the social and historical construction of meaning. Semiotics positions representation from three perspectives: icons, signs, and symbols. An icon, he noted, is a representation with a strong perceptual relationship to the object for which it stands (Barthes, 1964). Signs are conventions—“agreed upon abstractions that we associate with some thing or concept.” Letters, colors, shapes, or images on a screen mean nothing by themselves; we need to organize and assign meaning to them. Symbols (Langer, 1976) are instruments of thought; they work differently from icons and signs because rather than corresponding directly to objects or concepts, they work through conceptual frameworks. For example, a star, or the image of a star, may refer to a religion, but it also symbolizes all that the particular religion stands for (Wollen, 1969).
Reading implies an intention to construct meaning. From a modernist perspective, the meaning of a text lies within the text itself, placed there by the author; the role of the reader is to find the truth. From a postmodern perspective, meaning is the result of interaction between reader and text, constructed by author and reader together. Muffoletto stated that the construction of meaning can be seen through two different lenses, politics and pedagogy. Traditionally, teachers have been responsible for giving the “official” truth or meaning of texts, and standardized tests reinforce this; diversity of meaning is devalued. These practices are a result of seeing through only one lens. Reflective and critical analysis practices allow a democratic reconstruction of images.
The principles of critical visual literacy are essential in the increasingly visual world children face in their out-of-school literacy contexts. Muffoletto noted that the foundations of reflective visual literacy require that students value the differences of understanding and expression involved in the construction and deconstruction of all texts as social products. Furthermore, as technology changes, our understanding of “reality” changes. Muffoletto stated that educators must consider new literacies in terms of power relationships and how meaning is constructed.
Article 8
Messaris, P. (2001). New literacies in action: Visual education. Reading Online, 4(7). Available: http://www.readingonline.org/newliteracies/lit_index.asp?HREF=/newliteracies/action/messaris/index.html
Messaris, a leading researcher and theorist in the area of visual literacy, argued for a deepening of visual literacy education beyond a critical analysis of visual texts. He noted that the process of creating visual images contributes more to students’ understanding of the multiplicity of visual information to which they are exposed in a multimedia saturated world. However, despite the exposure to media, Messaris asked, are students indeed “media savvy”? Furthermore, he noted that one cannot assume that the consumption of visual images leads to improvement in a student’s creative abilities.
Then Messaris went on to describe the theoretical implications of the connection between visual creativity and greater cognition as defined by “spatial intelligence” (Gardner, 1983). Spatial intelligence, he noted, is the “process of forming mental representations of three-dimensional reality as a basis for understanding one’s environment and interacting with it effectively. It is a type of intelligence crucial for success in professions such as architecture or carpentry, but it is also a vital ingredient of any person’s everyday physical activities.” Messaris provided examples of how a film editor uses multiple devices for constructing meaning, including zooms, pans, transitions, focus, spatial layout, angle, etc.
Finally, Messaris discussed the implications of visual literacy for education. He noted that students must learn to create visual meaning, not just consume it. Visual connections come easily to experienced viewers. However, the ability to create multimedia texts that combine images does not come so easily; it is knowledge of a visual grammar that comes through active learning. Through the act of communicating through images, students move beyond seeing media as a “window on reality” to a more enlightened state in which they are able to construct new realities through the manipulation of visual conventions. The higher order spatial and analogical thinking skills used in film editing, Messaris argued, carry over to other realms of experience; therefore, learning these skills should “be considered the core objective of an actively oriented visual curriculum” (p. 8).
Lemke, J. (2006). Toward critical multimedia literacy: Technology, Research, and Politics. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 3-14).
In school, students are taught to carefully analyze and deconstruct text. Most often, however, the accompanying visual images are ignored. Although multimedia texts now outnumber monomodal (writing-only) texts, school curricula tend to ignore visual literacy. With the rise of the World Wide Web, “reading” images has become an even more essential skill for decoding multimodal texts such as web pages.
Lemke argued that we need a “broader definition of literacy itself, one that includes all literate practices, regardless of medium” (p. 4). Texts, he stated, are converging; television programs have websites and so do popular books, movies, and video games. For example, Harry Potter, which began as a print media phenomenon, has moved to websites, movies, television commentaries, and even video games. Similar content, images, and textual themes are distributed over a variety of media.
In light of the complexity of today’s literate activities, Lemke discussed the necessity of conceptual frameworks to help us “cope with the complexity and the novelty of these new multimedia constellations” (p. 5). The field of social semiotics (also known as critical discourse studies, critical media studies, and critical cultural studies) has been developing key concepts. The core idea of semiotics is that all human meaning-making shares a number of features. In multimedia semiotics, these common features form the basis on which integration across media is possible. The fundamental unit is the meaning-making practice, whether that practice is used to create or to analyze such texts. The model of meaning-making applies across disciplines and across semiotic systems. In fact, we can never make meaning with one mode alone, Lemke argued:
If you speak it, your voice also makes nonlinguistic meaning by its timbre and tone, identifying you as the speaker, telling something about your physical and emotional state, and much else. If you write it, your orthography presents linguistic meaning separately from additional visual meanings (whether in your handwriting or choice of font) (p. 5).
All communication, Lemke noted, is multimodal communication. He further defined multimodality as “the combination or integration of various sign systems or semiotic resource systems, such as language, gesture, mathematics, music, etc.” (p. 5). The resulting product is a kind of gestalt: not only is the whole greater than the sum of its parts, but the way meaning is distributed across a variety of modes matters more than any single mode.
For example, when we interact with a website, we make trajectories across links that carry us to a wide variety of genres and media. We not only surf within sites, we surf between sites, often discovering video, audio, and interactive media that accompany more traditional words and static images. Lemke noted that we are learning “to make meaning along these traversals” that are “relatively free of the constraints of conventional genres.” Additionally, the intertexts create meanings of their own. As Lemke noted, “as our culture increasingly enmeshes us in constellations of textual, visual, and other themes that are designed to be distributed across multiple media and activities…these cross-activity and cross-medium connections tend to become coherently structured” (p. 7).
In terms of these multimedia texts, Lemke identified key questions that should be answered as we prepare to teach critical multimedia literacy. In most multimodal presentations, different modes are used to represent meaning. One cannot simply deconstruct the verbal message and obtain the whole meaning; likewise, one cannot simply decode the visual design elements. These modes function together. Techniques of multimodal analysis must “show how text and images are selectively designed to reinforce one another” (p. 8). No single meaning is projected through a single modality; the signs from each medium can portray different messages, and some of these may create perverse or divergent meanings.
As educators, we must teach students to become specialists in critical multimedia literacy in order for them to make free and democratic choices. To be critical, Lemke noted, is not “just to be skeptical or to identify the workings of covert interests…it is also to open up alternatives, to provide the analytical basis for the creation of new kinds of meanings” (p. 13). A true critical discourse helps students not only to critique but also to create, author, and produce multimedia texts.
My Commentary: In the new world of Web 2.0—a world where end users, consumers, teachers, and students create content for themselves and their peers—self-generated online texts can take the form of word-processed documents, audio files, or videos. Free video creation software, such as Windows Movie Maker, allows any individual to create and upload a fully edited, semi-professional movie to the Internet from the comfort of their own home. YouTube is a prime example of this kind of opportunity: amateurs are able to create multimedia and showcase it to an audience, a process that at times takes no more than an hour. So how will these new multimedia literacies be defined?
Article 10
Hobbs, R. (2006). Multiple visions of multimedia literacy: Emerging areas of synthesis. In M.C. McKenna, L.D. Labbo, R.D. Kieffer, & D. Reinking (Eds.). International handbook of literacy and technology: Volume II (pp. 15-28).
Hobbs first discussed the impact of “screen activity” on American children and teens, who spend “an average of eight hours per day using media, including television, videogames, the Internet, newspapers, magazines, films, radio, recorded music, and books” (p. 15; as cited in Kaiser Family Foundation, 2001). In a world constantly bombarded with new and changing literacies, children need to be able to find and critique media messages. Educators traditionally rely on textual and language competencies. However, Hobbs noted that it is also essential for students to learn to use symbol systems (images, music, sound, motion) as means of expression and communication. Literacy educators are beginning to recognize that they need to teach students how to read and respond to the array of media technologies in order to prepare them for the 21st century.
Educators no longer own the concept of literacy; academic scholars from a wide variety of disciplines (media studies, psychology, cultural anthropology, communications, history, library and information science, literary theory, linguistics, rhetoric, etc.) have become increasingly interested in how individuals make meaning when reading and composing multimedia texts. As a result, educators are using new literacies terminology such as visual literacy, media literacy, critical literacy, information literacy, and technology literacy.
Visual literacy, a field based on nearly 100 years of work by interdisciplinary scholars, has long discussed the importance of visual materials and concepts (such as selection, framing, composition, sequence, and aesthetic dimensions of images) in the classroom. Scholars interested in visual literacy have examined how images are interpreted and understood, how images and text interact in meaning-making, how exposure to visual images affects cognitive development, and how semiotic dimensions can be examined. Learning about the visual conventions of images helps give “readers” a way to analyze texts and “creators” some strategies to enhance their own productions. Texts are only representations of reality and key visual grammars exist cross-culturally in the creation of these texts.
Information literacy has been defined by the American Library Association as the set of abilities an individual needs to recognize when information is needed and how to locate, evaluate, and use it. In many instances, however, when information literacy is actually taught in schools, it is reduced to a narrow checklist of specific skills rather than a more critical analysis drawing on a multiplicity of comprehension techniques.
Media literacy educators in the United States have been influenced by the work of British, Canadian, and Australian scholars who have described engaging educational practices that teach children to analyze mass media and popular culture. Good media literacy pedagogy stresses the process of inquiry and situated, active learning, drawing on the work of Freire and Macedo.
Critical literacy arose from traditions in semiotics and cultural studies. Meaning making in a critical literacy arena involves the social, historical, and political contexts combined with the author’s meaning. Critical, as used by these scholars, refers to the recognition of oppression and exploitation embedded in texts. Critical literacy scholars “explore reading within a sociocultural context [and] examine and understand how various texts, including pictures, icons, and electronic messages (as forms of symbolic expression) are used to influence, persuade, and control people” (p. 19).
After defining the new literacies, Hobbs discussed a model for integrating the conceptual tenets of multimedia literacies. School practices must change to incorporate themes of authors and audiences, meanings and messages, and representations and reality. Although some evidence is emerging from research on multimedia literacies, Hobbs noted that most examinations have looked at a small number of students in a single classroom (Alvermann, Moon, & Hagood, 2001; Anderson, 1983). Some have explored whether students learn the appropriate facts through multimedia (Baron, 1985; Kelly, Bunter & Kelly, 1985) or whether a video broadcast affects cognitive or critical literacy skills (Vooijs & Van de Voort, 1993). Recently, some case studies have documented educators’ practices in classrooms (Hart & Suss, 2004; Hart, 1998; Hurrell, 2001; Kist, 2000—note to self, also include Leu book here).
Hobbs noted that further research should continue to explore how and why multiliteracies are incorporated into classroom practices. Furthermore, she noted that educators must be responsive to Masterman (1985) who identified a central outcome for media education: the ability to apply skills and strategies learned in the classroom to everyday life. Such work depends on teachers who have the initiative, creativity, imagination and perseverance to enable students “to develop the competencies they need to be citizens of an information age” (p. 25).
Article 11
Royce, T.D. (2007). Intersemiotic complementarity: A framework for multimodal discourse analysis. In T. D. Royce & W. L. Bowcher (Eds.). New directions in the analysis of multimodal discourse. (pp. 63-109).
Royce noted that the theoretical foundation for multimodal discourse analysis is derived from the Systemic Functional Linguistics (SFL) view of language as a ‘social semiotic’ (Halliday, 1978). Halliday made four central claims about language: it is functional in terms of what it can do or what can be done with it; it is semantic in that it is used to make meaning; it is contextual in that meanings are affected by social and cultural situations; and it is semiotic in that it is a process of selecting from “the total set of options that constitute what can be meant” (Halliday, 1978, 1985, p. 53). Halliday also identified three types of meaning, or “metafunctions,” that operate simultaneously in the semantics of every language: the ideational metafunction (responsible for “the representation of experience”); the interpersonal metafunction (meaning as a form of action); and the textual metafunction (maintaining relevance to the context). Reading or viewing involves the simultaneous interplay of three elements, which correlate with the metafunctions: represented participants (elements that are actually present in the visual), interactive participants (participants interacting with each other in the act of reading—the graphic designer and the reader), and the visual’s coherent structural elements (compositional features such as elements of design or layout).
Royce provided a detailed analysis, based on the intertextuality of these factors, on pages 68 and 69. The interpretation deals with how visual and verbal modes interact “intersemiotically” with respect to the identification of participants, the represented processes or activities, the circumstances, and the attributes. Each of these aspects can be discussed in terms of Visual Message Elements. Royce further explained that just as metafunction concepts can be applied to visual modes of communication, so can Halliday and Hasan’s (1985) analysis of cohesion in text be used to “explicate the ideational cohesive relations between the modes in a multimodal text.” For this purpose, Royce used the following sense relations: Repetition (R) for the repetition of experiential meaning; Synonymy (S) for a similar meaning; Antonymy (A) for an opposite meaning; Hyponymy (H) for a general class of something and its subclasses; Meronymy (M) for reference to the whole of something and its parts; and Collocation (C) for words that tend to co-occur within a given subject area (Halliday, 1985).
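A note to myself on how this coding scheme might be operationalized when we analyze the campers' scripts and videos: below is a minimal sketch, in Python, of one way to record and tally Royce's intersemiotic sense relations between visual message elements and lexical items. This is my own illustration under my own assumptions, not a procedure Royce provides, and the example pairings are invented placeholders.

from collections import Counter

# Royce's sense relations, abbreviated as in the chapter.
RELATIONS = {"R": "Repetition", "S": "Synonymy", "A": "Antonymy",
             "H": "Hyponymy", "M": "Meronymy", "C": "Collocation"}

# Each coding is a (visual message element, lexical item, relation code) triple.
# The pairings below are invented placeholders, not data from any study.
codings = [
    ("photograph of a wheat field", "wheat", "R"),
    ("photograph of a wheat field", "grain", "H"),
    ("diagram of a loaf of bread", "bread", "R"),
    ("diagram of a loaf of bread", "crust", "M"),
]

# Tally how often each intersemiotic relation appears across the codings.
tally = Counter(code for _, _, code in codings)
for code, label in RELATIONS.items():
    print(f"{label} ({code}): {tally.get(code, 0)}")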
Furthermore, the examination of the intersemiotic interpersonal features of a multimodal text looks at the relations between the visual and the viewer and how they are represented (Kress & van Leeuwen, 1996). This can be very important in terms of the speech functions distinguished by Halliday (1985): offer, command, statement, and question. Visual type can also be important, as can the viewer’s level of involvement, which is realized by visual angle or point of view. Power relations between viewers and the represented participants are encoded in the angle between them, and the degree of social distance is realized by the size of frame: close up, medium, and long shot. These different kinds of shots parallel the distances people use when they talk face to face (Kress & van Leeuwen, 1990). Relationships can occur when the interpersonal meanings in the visual and verbal modes co-occur on the same page and are related through reinforcement of address and through intersemiotic attitudinal congruence and attitudinal dissonance (modality) relations. Relationships can also occur when the compositional meanings are integrated by design features such as information value, salience, visual framing, visual synonymy, and potential reading paths.
deb
Thursday, May 31, 2007
Some fractured notes from meeting on 5/31
Notes from 5/31
Kress seems to construct dichotomies. We need to flesh out the planet between his poles.
We're not looking for social commentary.
Look at what ideas these kids come with, how they turn them into texts, and how they turn them into videos. What are their literate activities? ie. shots, cropping, sequencing.
Comprehensibility, 6 Traits
Intention precedes production of the message. (Derrida)
Psychological and mechanical relationships will be the same as for adults.
They are the mediator between intention and production. What they do is what we are studying.
Reading and writing require the same prerequisite skills.
Viewing and producing video do not require the same skills. You can watch a movie with no prerequisites, although your level of background information and sophistication will affect what you are able to get out of the experience. Producing video requires special equipment and expertise. Only very recently have both the means of production and access to distribution become widely available.
Thursday, May 24, 2007
Moving Images Count Too - JW
Throughout Chapter 3, Kress seems to disregard the temporality of video. For Kress, meaning in the image is tied exclusively to the spatial, while temporal significance belongs only to writing. But the temporal is central to meaning-making in the moving image. Kress lumps all kinds of images together.
The discussion of the term "literacy" is interesting. Kress argues against the wide application of the term. "Something that has come to mean everything, is likely not to mean very much at all." (p. 22) I think it's wise to apply specific language to different circumstances, but I doubt that this word can be reclaimed in any meaningful way. "Literacy" is too widely used and means too many different things.
JW
Monday, May 21, 2007
Defining the Project
We met today - James King, Debbie, James Welsh - to discuss the scope of the work that we will be doing this summer with the camp.
We will try to get a picture of the resources that the kids have at home. We assume that they are awash in media, but we'd like to have some certainty about that. We will give them a survey of some kind to show that they have access to the internet, TV, gaming, etc.
We will be examining their products and interviewing them to find out about the source material for the products. Debbie will be a participant-observer, mostly observing.
Debbie will be collecting:
Ambient data
Directed interview questions to test the hypothesis
Debbie's metacognitive data about what's going on
I don't have complete notes about the rest of this meeting. Please add!
Sunday, May 20, 2007
Chapters 1-3 deb
"The world told is a different world to the world shown"--(p.1)
I think it is easier to lure individuals into false beliefs through the written word. People tend to accept words written by "experts" as "gospel truth", while images are usually regarded as open to wide interpretation. I think that through a program of critical thinking about media texts--texts that include both images and words--we can teach our students to be more critical in their interpretation of both the words and the pictures. Critical media literacy strategies can enhance both image and text interpretation.
While humans have always "read" the world through imagery, traditional schooling has rarely focused on the power of the image. Only privileged individuals who chose to study journalism, film, graphic design, or other visual media were exposed to a real literacy of the techniques and strategies. Now, with computer applications that offer simple editing for magazine-like quality, movie editing software, and simple web-page creation (like this blog), individuals can create their own multimodal "texts" that include both image and word.
Here are some Kress thoughts I find particularly significant in chapter one.
Page 1--speech or writing as a narrative genre. Writing-->logic of time-->logic of sequence in its elements of time-->temporally governed arrangements.
Page 1--image as display genre. Image--logic of space--logic of simultaneity of visual elements--spatially organized arrangements (center as central, above as superior)--recast as spatial relations.
Page 3-4--Reading Paths (this concept figures widely through Kress's work and is cited by many in the hypermedia world). Determined by the maker or the reader...or a combination of both. By creating salient elements, the maker guides the reader toward a path. However, for the reader, reading images "out of order" is easy.
Page 3--Kress noted Reading Paths as one effect of new media. Here are some others...1) use of a multiplicity of modes (still and moving image, music, sound effects) 2) interactivity--interpersonal (write back) = social power, hypertextual = semiotic power.
Page 8--in the era of the dominance of writing, the image was subject to the logic of writing. Now, in an era of the dominance of the screen, writing appears as subject to the logic of images. (Think of captions for images--increasingly sophisticated picture books employ images outside of the text to tell stories within/outside of stories.)
CHAPTER 3
Font, embolden, italicise, bullet points (bullets, quick, fired at us)
Wednesday, May 16, 2007
Ch. 2 Notes
The book, which I just realized hasn't been identified yet in this blog, is Gunther Kress' Literacy in the New Media Age.
Kress places us at the intersection of four broad changes: social, economic, political, and technological. I hope that later in the text he elaborates on how he sees these four changes. It seems that we're always at that intersection.
Something that Kress says here about writing becoming increasingly display-oriented got me thinking about the children's drawings. Before kids formally learn to write, they draw and color. Their pictorial representations grow into "alphabetic" writing. In the past, those kids moved from making pictures to making words. In the new media age that Kress is describing, do those kids stay with pictures? Or do they move from pictures to words, then back to pictures? What does increasingly image-laden text mean to kids who are first learning to write? It's possible for a child to learn how to make a movie on a computer before learning to write, or at least simultaneously. Does that happen anywhere? It must. So what does that look like and how does it affect alphabetic writing?
I would also argue that writing has long been display-oriented. The printed word includes many design elements, including the geography of the page, font design, binding, color, and the use of organizational guides.
Here's a quote that I like,
"We need to be aware however, that on the screen writing may appear with the modes of music, of colour, of (moving) image, of speech, of soundtrack. All these bear meaning, and are part of one message. The mode of writing is one part of that message, and so is partial in relation to the message overall."
Very true.
Also true is Kress' admonition that change is ever-present and that we "can neither pretend that there is stability nor demand it." There is no going back. There is no "back" to go back to.
The questions he poses in the second paragraph on page 12 seem to relate strongly to one of those wonderful conversations we had this spring in the round room - going fast vs. going deep. Fast literacies let you jump easily from topic to topic, cover a lot of ground, and find subtle connections between seemingly disparate ideas, but they don't seem to encourage exploring a single topic in depth. With ICT, there may be vast resources easily available to allow you to "go deep", but the structure doesn't seem to encourage it.
Kress also notes what we have experienced all too frequently: working with images costs time and storage space. So... very... true. There's also a little warning there about over-borrowing from other fields: "Extending one theory too far, into a domain for which it was never meant, does no one a service."
James
Sunday, May 13, 2007
Reflections on Chapter 1 - JW
In the second paragraph, Kress makes these predictions: “Language-as-speech will remain the major mode of communication; language-as-writing will increasingly be displaced by image in many domains of public communication, though writing will remain the preferred mode of the political and cultural elites.” This means that access to higher spheres of influence will still be governed by mastery of these elite Discourses. Screen and image may be the Discourses of control, but writing and books will remain the Discourses of Power. Sure, you can make your own video and distribute it to the world, but the people who are really in control will still be the ones who have access to law and government and business. Citizens who are fluent in new media literacies but can’t read and write in the mode of dominant Discourses are ready to be controlled and victimized.
“ ‘The world narrated’ is a different world to ‘the world depicted and displayed’.” This is very true, but I disagree with some of the attributes that he identifies. Kress states that while text is most strongly temporal (one thing follows another), image is spatial. That’s true, but image is also temporal. If we are discussing ‘image’ here to include video, and I think we should, then one can’t discount the temporal aspect of image. The order in which elements of meaning are presented in a video has a large impact on meaning. You could argue that video is more strictly governed by the temporal than writing. Partially due to technological limits, it’s more difficult to skip around in video than in text. Written text allows you to set your pace, to reread as necessary, to scan and skim, and to skip to the sections of text that are most pertinent to you. With video, the viewer has less control of temporal elements. With images, I think it’s easier to embed messages that are consumed uncritically, because an image looks like unshaped reality.
By page 4, Kress is discussing further differences between writing and image. He says that the written word is “vacuous” and without meaning and the creative act is to associate those words with meaning. By contrast, he says that images are “filled with meaning” and the creative act is in arranging those meaning elements. I can’t agree with this completely. Images are not the things they depict. An image is just another representation of a thing. An image is closer to reality than a word (and subtext is easier to disguise in an image), but the image is still a container for meaning assigned from the outside. An image does contain literal level significance for the thing represented, but the really interesting (and powerful) part is not at the literal level.
There’s some great stuff on page 6, where Kress is describing how authorship changes in an environment where everyone can be an author. Call it the Wikipedia effect. Authorship used to imply authority. Very few could get published and the publishing process weeded out all but (supposedly) the most authoritative. Now, anyone can get published, so authorship, in and of itself, doesn’t mean squat. This is why we teach kids to be critical readers and why online critical literacy is so important. Anyone can get published. Citizens need to know how to weed through the ideas for themselves now.
Kress continues to discuss the changing role of authors and talks about the dissolution of the myth of original authorship. I know I should accept this. I know it’s probably true. But I’m not ready to let go of that idea of the author (in whatever medium) as the creative originator. I feel that writing is more than just rearranging and regurgitating the ideas of others. I’ve got to think more about that one.
Kress closes with an objection, and this is where I found myself strongly diverging from his opinions. He says that books today are not what books used to be, that textbooks are not what textbooks used to be. I’m reading the tone in this section as bitter nostalgia. He seems to say, “in my day, books were books - not like this junk you kids have today.” Is it true that textbooks, as a whole, were better thirty-five years ago? Actually, he doesn’t say that they were better. He says textbooks were “expositions of coherent ‘bodies of knowledge’ presented in the mode of writing” and that now a textbook is “a collection of ‘worksheets’, organised around the issues of the curriculum, and put between more or less solid covers.” He laments the loss of “that sense of a reader engaging with and absorbing a coherent exposition of a body of knowledge, authoritatively presented” and says that it has been replaced with activities that place students in action around a topic to learn by doing.
Two questions: Have textbooks changed in these ways in the last four decades? Is it a bad thing if they have?
I think I’d rather have kids learning-by-doing than “absorbing” knowledge from experts.
I liked Kress’ point about the age of the writers of websites. Yes, the internet is mostly text now, but most of the current internet is maintained by people who grew up with text as writing and the predominance of books. This may change as we die off.
As he closes the first chapter, Kress presents the ideas with which I most strongly disagree. He says that image has coexisted with writing in the past, but that image was subordinate to writing. “In simple terms, it fitted in how, where and when the logic of the written text and of the page suggested. In the era of the dominance of the screen, writing appears on the screen subject to the logic of the image.” This seems to me to be the wrong approach.
Both text and image serve meaning. An effective author uses the best tools available for a specific purpose. Sometimes text predominates and sometimes image does, but an effective author applies the strengths of the tools at hand to communicate a specific message to a specific audience.
Writing is an act of making meaning. Text and images should be employed to serve the purpose and message of the author. The “logic of text” shouldn’t preempt the message; nor should the “logic of screen”. When creating a web page, an author should start with message, then use whatever elements - text, image, sound, video - best suit the logic of the message. It’s true that there is a difference between “logic of text” and “logic of screen” and that the difference defines the possibilities of expression in each medium.
I don’t know. Now I’m starting to contradict myself. We need to seek answers to the questions that Kress puts forth in his closing paragraph, but I don’t have an inkling what those answers might be. I am put off by what sounds to me like an alarmist tone when discussing how literacy is changing. Change isn’t good or bad; it’s just inevitable. Living things change. This includes dynamic systems that act like living things, in this case language and culture. Does that mean we’re going to hell in a handbasket? It depends largely on what you mean by “we”. I think that a lot of people get uptight about changes in culture because they want the future to look like their past. It won’t. People will change the way institutions work, they’ll change the language to suit them, they’ll change laws and governments and anything else that they want to. We can’t impose ourselves on the future. We should discuss, we should debate, we should look for answers. But the answers are not a return to things past.
James