A workshop at the VisWeek 2012 conference, October 14–15, 2012, in Seattle, WA, USA.
Agenda and Papers
1. Evaluation at Design (InfoVis): How do we learn from users at the design stage to correct mistakes before building a full prototype?
Experiences in Involving Analysts in Visualisation Design (Position Paper)
An Integrated Approach for Evaluating the Visualization of Intensional and Extensional Levels of Ontologies (Research Paper)
2. Evaluation at Design (SciVis): How do we learn from users at the design stage to correct mistakes before building a full prototype?
Which Visualizations Work, for What Purpose, for Which Audiences? Visualization of Terrestrial and Aquatic Systems (VISTAS) Project – A Preliminary Report (Position Paper)
Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool (Position Paper)
Bret Jackson, Dane Coffey, Lauren Thorson, David Schroeder, Arin Ellingson, David Nuckley, Daniel Keefe
3. Cognition and evaluation (new metrics / measures): How can we measure user cognition?
Evaluating Scientific Visualization Using Cognitive Measures (Position Paper)
The ICD3 Model: Individual Cognitive Differences in Three Dimensions (Position Paper)
Evan Peck, Beste Yuksel, Lane Harrison, Alvitta Ottley, Remco Chang
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems (Position Paper)
4. Evaluating visualizations: How can we measure visualization?
Spatial Autocorrelation-Based Information Visualization Evaluation (Research Paper)
The Importance of Tracing Data Through the Visualization Pipeline (Position Paper)
5. Why evaluate?: What are the goals and motivations of evaluations? How should these be conveyed in reporting evaluation?
Stop The Evaluation Arms Race! A Call to Evaluate Visualization Evaluation (Position Paper)
The Four-Level Nested Model Revisited: Blocks and Guidelines (Research Paper)
6. New evaluation framework: What can we learn from patterns and templates and apply to visualization evaluation?
Patterns for Visualization Evaluation (Research Paper)
A Reflection on Seven Years of the VAST Challenge (Research Paper)
7. Novel methods
Reading, Sorting, Marking, Shuffling: Mental Model Formation through Information Foraging (Position Paper)
Evaluating Analytic Performance (Position Paper)
8. Improving existing methods
How to Filter out Random Clickers in a Crowdsourcing-Based Study? (Research Paper)
Questionnaires for Evaluation in Information Visualization (Position Paper)
Methodologies for the Analysis of Usage Patterns in Information Visualization (Position Paper)