Evaluation in Information Visualization: Seven Scenarios
This project started as an idea at BELIV 2008 in Florence, Italy, and grew considerably over the years through our discussions and research on information visualization evaluation. Heidi, Enrico, and I went through 850 infovis papers from four publication venues (EuroVis, InfoVis, VAST, and the IVS journal) and distilled seven types of evaluation scenarios. With this broad survey and the distillation of these scenarios we make two contributions: first, we encapsulate current practices in the information visualization research community and, second, we provide a different approach to deciding what might be the most effective evaluation for a given information visualization. The scenarios can be used to choose appropriate research questions and goals, and the provided examples can be consulted for guidance on how to design one’s own study.
In the paper we list and explain the following seven scenarios:
The scenarios for understanding data analysis are:
- Understanding Environments and Work Practices (UWP)
- Evaluating Visual Data Analysis and Reasoning (VDAR)
- Evaluating Communication Through Visualization (CTV)
- Evaluating Collaborative Data Analysis (CDA)
The scenarios for understanding visualizations are:
- Evaluating User Performance (UP)
- Evaluating User Experience (UE)
- Evaluating Visualization Algorithms (VA)
We show that the prevalence of these scenarios in the literature we surveyed was quite skewed and that the last three (shown in red) are by far the most common (but not necessarily the most important):
And here is a temporal overview per scenario:
There are two errors in the following sentences of the article:
Nonetheless, the distributions of papers across the seven scenarios remain skewed towards three scenarios: Evaluating User Performance–UP (33%), Evaluating User Experience–UE (34%), and Evaluating Visualization Algorithms–VA (22%). Together, these three scenarios contribute to 85% of all evaluation papers over the years. This is in sharp contrast to the 15% share of the process scenarios.
It should be: a total of 89% for the visualization scenarios and only an 11% share for the process scenarios. Thanks to Craig Anslow for pointing this out.
Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale (2012). Empirical Studies in Information Visualization: Seven Scenarios. IEEE Transactions on Visualization and Computer Graphics, 18(9):1520–1536, September 2012. Appeared online: 30 Nov. 2011. Supersedes an earlier techreport. [doi]
The techreport below is an early version; the journal article above has more accurate, updated numbers and has generally been improved in writing and presentation (thanks to excellent reviewer comments).
Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale (2011). Seven Guiding Scenarios for Information Visualization Evaluation. Techreport 2011-992-04, Department of Computer Science, University of Calgary, January 2011. Superseded by and improved in a follow-up journal article. [pdf]