FP-Jay Moon and Dennis Jeong
- Dennis Jeong
- Jay Moon
Inspired by XKCD's movie character maps, we're interested in visualizing character movements in movies by parsing screenplays and determining where characters are at any given point in time. We chose screenplays because they're well structured: they denote the start of scenes with prominent slug lines (like "INT. HOUSE – DAY") and write character names in all caps, and some are even written in a Markdown-esque format called Fountain. The challenge is automatically creating visualizations of people who are in the same place at the same time. Specifically, we must figure out a way to group characters to maximize legibility. The ultimate goal is a method of generating a two-dimensional timeline where, instead of graphing events, we show how important people move and interact over time.
After discovering that our proposal had already been carried out, we have chosen to pursue a different project:
We're interested in understanding whether political preconceptions and bias can affect people's perceptions of different data encodings. By exposing study participants to politically controversial data, such as the performance of the economy, the effect of gun control, or the outcomes of war policies, we will determine whether the interpretation of a visualization changes. We want to measure this effect across multiple encodings that we have discussed throughout class, such as position, length, slope, color, and area. We chose to study this effect because we want to know whether different people see different results in the same visualizations.
Project Progress Presentation
In “See What You Want to See: Motivational Influences on Visual Perception”, Balcetis and Dunning conducted a series of experiments to demonstrate how a person’s wishes and preferences shape the interpretation of ambiguous figures, such as a drawing that can be seen as either a horse or a seal. During this experiment, the researchers motivated the participants with a “random” process that would give them orange juice if they saw more land animals, and an unpleasant-looking health drink if they saw more sea animals. However, the tie was always broken with the horse/seal figure, thereby measuring the interpretation of the ambiguous image. Through multiple variations of the experiment, it was demonstrated that this effect persisted even after the goal was flipped.
Opening the Political Mind? The effects of self-affirmation and graphical information on factual misperceptions
“Opening the Political Mind? The effects of self-affirmation and graphical information on factual misperceptions” is an unpublished paper about how graphical information can be used to correct politically driven misperceptions. The experiments demonstrated that visualizations can be effective tools in “increasing the difficulty of resisting information and maintaining an incorrect belief.” This paper provides a good guideline for how to choose politically relevant data visualizations.
“Visualization Rhetoric: Framing Effects in Narrative Visualization” surveys a variety of rhetorical techniques used in visualization and provides a taxonomy for these techniques, breaking down “editorial layers” in which meaning can be imposed. This paper will be a good background for formalizing and anticipating the effects of the different visualizations we present to our participants.
“Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design” by Heer and Bostock explains how to use crowdsourcing platforms such as Mechanical Turk for evaluation purposes; while the specific hypotheses they tested differ from ours, their general methodology will be extremely important to our experiment. Because our experiment depends on political ideology, it will be very difficult to obtain a representative sample in the California area. Therefore, we will have to rely heavily on crowdsourced and online surveys for our data collection.
Saturday, November 19
Draft experiment plan and have it reviewed by TAs/professors. (Joint)
Sunday, November 20
Upload experiment to mTurk to collect initial data (N=50). (Joint)
Saturday, November 26
Collect and analyze initial results, revise experiment as needed. (Joint)
Monday, November 28
Upload final experiment to mTurk. (Joint)
Friday, December 2
Collect final results, start analysis. (Joint)
Tuesday, December 6
Finish poster, submit printing order. (Joint)
Sunday, December 11
Finish paper. (Joint)