Schedule

08:30-08:50  Introduction

08:50-09:30  Keynote: Bad Stats are Miscommunicated Stats
             Pierre Dragicevic
09:30-10:10  Paper Session - Rethinking Evaluation Level: Abstracted Task vs In Situ Evaluation
             (Session Chair: Heidi Lam)

Visualizing Dimensionally-Reduced Data: Interviews with Analysts and a Characterization of Task Sequences
Matthew Brehmer, Michael Sedlmair, Stephen Ingram, Tamara Munzner

User Tasks for Evaluation: Untangling the Terminology Throughout Visualization Design and Development
Alexander Rind, Wolfgang Aigner, Markus Wagner, Silvia Miksch, Tim Lammarsch

Considerations for Characterizing Domain Problems
Kirsten Winters, Denise Lach, Judith Cushing

Navigating Reductionism and Holism in Evaluation
Michael Correll, Eric Alexander, Danielle Albers, Alper Sarikaya, Michael Gleicher

BREAK: 10:10-10:30
10:30-11:15  Paper Session - Cognitive Processes and Interaction
             (Session Chair: Petra Isenberg)

Evaluation Methodology for Comparing Memory and Communication of Analytic Processes in Visual Analytics
Eric Ragan, John Goodall

Just the other side of the coin? From error- to insight-analysis
Michael Smuc

Evaluating User Behavior and Strategy During Visual Exploration
Khairi Reda, Andrew Johnson, Jason Leigh, Michael Papka

Value-Driven Evaluation of Visualizations
John Stasko
11:15-11:50  Paper Session - New Techniques I: Eye Tracking
             (Session Chair: Tobias Isenberg)

Benchmark Data for Evaluating Visualization and Analysis Techniques for Eye Tracking for Video Stimuli
Kuno Kurzhals, Daniel Weiskopf

Evaluating Visual Analytics with Eye Tracking
Kuno Kurzhals, Brian Fisher, Daniel Weiskopf, Michael Burch

Towards Analyzing Eye Tracking Data for Evaluating Interactive Visualization Systems
Tanja Blascheck, Thomas Ertl
11:50-12:20  Paper Session - New Techniques II: Crowdsourcing
             (Session Chair: Michael Sedlmair)

Gamification as a Paradigm for the Evaluation of Visual Analytics Systems
Nafees Ahmed, Klaus Mueller

Crowdster: Enabling Social Navigation in Web-based Visualization using Crowdsourced Evaluation
Yuet Ling Wong, Niklas Elmqvist

Repeated Measures Design in Crowdsourcing-based Experiments for Visualization
Alfie Abdul-Rahman, Karl Proctor, Brian Duffy, Min Chen

LUNCH: 12:20-14:00
14:00-14:55  Paper Session - Adopting Methods from Other Fields
             (Session Chair: Heidi Lam)

Evaluation of information visualization techniques: analysing user experience with reaction cards
Tanja Merčun

Toward Visualization-Specific Heuristic Evaluation
Alvin Tarrell, Ann Fruhling, Rita Borgo, Jean Scholtz, Georges Grinstein, Camilla Forsell

Experiences and Challenges with Evaluation Methods in Practice: A Case Study
Simone Kriglstein, Margit Pohl, Nikolaus Suchy, Theresia Gschwandtner, Silvia Miksch, Johannes Gärtner

More Bang for Your Research Buck: Toward Recommender Systems for Visual Analytics
Leslie Blaha, Dustin Arendt, Fairul Mohd-Zaid

Sanity Check for Class-coloring-based Evaluation of Dimension Reduction techniques
Michaël Aupetit
14:55-15:35  Paper Session - Experience Reports
             (Session Chair: Michael Sedlmair)

Oopsy-Daisy: Failure Stories in Quantitative Evaluation Studies for Visualizations
Sung-Hee Kim, Ji Soo Yi, Niklas Elmqvist

Pre-Design Empiricism for Information Visualization: Scenarios, Methods, and Challenges
Matthew Brehmer, Sheelagh Carpendale, Bongshin Lee, Melanie Tory

Field Experiment Methodology for Pair Analytics
Linda Kaastra, Brian Fisher

Utility Evaluation of Models
Jean Scholtz, Oriana Love, Mark Whiting, Duncan Hodges, Lia Emanuel, Danae Stanton Fraser
15:35-15:40  Brief introduction to break-out groups

BREAK: 15:40-16:15

16:15-17:50  Break-out group discussions