A workshop at the VisWeek 2012 Conference on October 14/15, 2012, in Seattle, WA, USA.

Agenda and Papers

Morning Sessions

1. Evaluation at Design (InfoVis): How do we learn from users at the design stage to correct mistakes before building a full prototype?

Experiences in Involving Analysts in Visualisation Design (Position Paper) Aidan Slingsby, Jason Dykes
An Integrated Approach for Evaluating the Visualization of Intensional and Extensional Levels of Ontologies (Research Paper) Isabel Silva, Carla Dal Sasso Freitas, Giuseppe Santucci

2. Evaluation at Design (SciVis): How do we learn from users at the design stage to correct mistakes before building a full prototype?

Which Visualizations Work, for What Purpose, for Which Audiences? Visualization of Terrestrial and Aquatic Systems (VISTAS) Project – A Preliminary Report (Position Paper) Judith Cushing, Kirsten Winters, Denise Lach, Michael Bailey, Susan Stafford, Evan Hayduk, Jerilyn Walley, Christoph Thomas
Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool (Position Paper) Bret Jackson, Dane Coffey, Lauren Thorson, David Schroeder, Arin Ellingson, David Nuckley, Daniel Keefe

3. Cognition and evaluation (new metrics/measures): How can we measure user cognition?

Evaluating Scientific Visualization Using Cognitive Measures (Position Paper) Erik Anderson
The ICD3 Model: Individual Cognitive Differences in Three Dimensions (Position Paper) Evan Peck, Beste Yuksel, Lane Harrison, Alvitta Ottley, Remco Chang
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems (Position Paper) Alex Endert, Chris North

4. Evaluating visualizations: How can we measure visualizations?

Spatial Autocorrelation-Based Information Visualization Evaluation (Research Paper) Joseph Cottam, Andrew Lumsdaine
The Importance of Tracing Data Through the Visualization Pipeline (Position Paper) Aritra Dasgupta, Robert Kosara

Afternoon Sessions

5. Why evaluate?: What are the goals and motivations of evaluations? How should these be conveyed in reporting evaluation?

Stop The Evaluation Arms Race! A Call to Evaluate Visualization Evaluation (Position Paper) Michael Gleicher
The Four-Level Nested Model Revisited: Blocks and Guidelines (Research Paper) Miriah Meyer, Michael Sedlmair, Tamara Munzner

6. New evaluation framework: What can we learn from patterns and templates and apply to visualization evaluation?

Patterns for Visualization Evaluation (Research Paper) Niklas Elmqvist, Ji Soo Yi
A Reflection on Seven Years of the VAST Challenge (Research Paper) Jean Scholtz, Mark Whiting, Catherine Plaisant, Georges Grinstein

7. Novel methods

Reading, Sorting, Marking, Shuffling: Mental Model Formation through Information Foraging (Position Paper) Laura McNamara, Nancy Orlando-Gay
Evaluating Analytic Performance (Position Paper) Brian Fisher, Linda Kaastra, Richard Arias-Hernández

8. Improving existing methods

How to Filter out Random Clickers in a Crowdsourcing-Based Study? (Research Paper) Sung-Hee Kim, Hyokun Yun, Ji Soo Yi
Questionnaires for Evaluation in Information Visualization (Position Paper) Camilla Forsell, Matthew Cooper
Methodologies for the Analysis of Usage Patterns in Information Visualization (Position Paper) Margit Pohl