Kasper Hornbæk, Human-Computer Interaction Researcher, University of Copenhagen
Title: Conceptual and Practical Challenges in InfoViz Evaluations
Abstract: Research in information visualization (InfoViz) has developed considerably over the last 25 years. In particular, the field is now informed by a substantial and growing literature on evaluations of visualizations. To keep advancing InfoViz, I believe we need to address two limitations of our evaluations. On the one hand, few empirical studies are motivated by theory or compare equally plausible hypotheses. Mostly, the InfoViz literature proposes radical innovations (in William Newman's terms) and does little to develop and test concepts. On the other hand, many of the practical, low-level decisions in InfoViz evaluations are problematic. Like most HCI researchers, we evaluate our own interfaces, use mostly simple outcome measures, rarely study the process of interaction, and select tasks somewhat arbitrarily. This talk will outline the conceptual and practical challenges of evaluation and begin a discussion of how to overcome them.
Biography: Kasper Hornbæk received his M.Sc. and Ph.D. in Computer Science from the University of Copenhagen in 1998 and 2002, respectively. Since 2009 he has been a professor with special duties in Human-Centered Computing at the University of Copenhagen. His core research interests are human-computer interaction, usability research, search user interfaces, and information visualization; detours include eye tracking, cultural usability, and reality-based interfaces. Kasper serves on the editorial boards of the Journal of Usability Studies, Interacting with Computers, and the International Journal of Human-Computer Studies (IJHCS). He has published at CHI, UIST, ACM Transactions on Computer-Human Interaction, and Human-Computer Interaction, and won IJHCS's most cited paper award for 2006-2008.
Materials: Presentation & Notes
The purpose of information visualization is to provide users with accurate visual representations of data and natural interaction tools to support discovery and sensemaking. These activities are often exploratory in nature, can take place over days, weeks, or months, and rarely follow a predefined or linear workflow. While the overall use of information visualizations is accelerating, the growth of techniques for evaluating these systems has been slow. To understand these complex behaviors, evaluation efforts should be targeted at the component level, the system level, and the work environment level. Commonly used evaluation metrics such as task completion time and number of errors appear insufficient to quantify the quality of an information visualization system; hence the name of the workshop: "beyond time and errors ...".
BELIV 2010 aims to gather researchers in the field to continue the exploration of novel evaluation methods, and to structure the knowledge on evaluation in information visualization around a schema in which researchers can easily identify unsolved problems and research gaps.
This is the third edition of the BELIV workshop series. Based on feedback from past workshop participants, BELIV 2010 will be a two-day workshop, providing a more interactive environment in which participants can produce a research agenda to be published online.
Interested? Read How_To_Participate. Note: submissions are closed.
Enrico Bertini (University of Konstanz, Germany)
Heidi Lam (Google Inc., Mountain View, CA, USA)
Adam Perer (IBM Haifa Research Lab, Mount Carmel, Haifa, Israel)
Catherine Plaisant (Univ. of Maryland, USA)
Giuseppe Santucci (Sapienza Università di Roma, Italy)
Remco Chang (UNC Charlotte, USA)
Alan Dix (Lancaster University, UK)
Carla Dal Sasso Freitas (Instituto de Informatica UFRGS, Brazil)
Jean-Daniel Fekete (INRIA, France)
Georges Grinstein (UMass Lowell, USA)
Jeffrey Heer (Stanford, USA)
Nathalie Henry (Microsoft Research, USA)
Petra Isenberg (Univ. of Calgary, Canada)
Silvia Miksch (Vienna Univ. of Technology, Austria)
Tamara Munzner (Univ. of British Columbia, Canada)
Chris North (Virginia Tech, USA)
George Robertson (Microsoft Research, USA)
Jean Scholtz (Pacific Northwest National Lab, USA)
John Stasko (Georgia Tech, USA)
Jarke Van Wijk (TU Eindhoven, Netherlands)