Two talks about robust objectives for visualization design and evaluation

This is Jessica. I'll be giving a talk twice this week on how to make data visualizations more robust for inference and decision making under uncertainty. Today I'm speaking at the computer science seminar at the University of Illinois Urbana-Champaign, and on Wednesday I'll be giving a distinguished data science lecture at Cornell.

In the talk I consider what makes a good objective to target in designing and evaluating visualization displays, one that is "robust" in the sense that it leads us to better designs even when people don't use the visualizations as intended. I'll walk through what I learned from using effect size judgments and decisions as a design target, and how aiming for visualizations that facilitate implicit model checks can be a better target for designing visual analysis tools.

At the end I jump up a level to ask what our objectives should be when we design empirical visualization experiments. I'll describe a framework we're developing that uses the idea of a rational agent with full knowledge of a visualization experiment's design to create benchmarks for judging when an experiment design is good (by asking, e.g., is the visualization important for doing well on the decision problem under the scoring rule used?), and for diagnosing what causes the losses in observed performance by participants in our experiment.
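To make the benchmark idea concrete, here is a minimal sketch of my own (not from the talk, with made-up numbers): a rational agent knows the experiment's data-generating process, and we compare its expected score when it conditions on the visualized signal against its expected score when it only uses the prior. If the gap is near zero, the visualization isn't important for doing well on that decision problem under that scoring rule, which suggests the experiment design is a weak test of the visualization.

```python
# Hypothetical benchmark sketch: value of a visualized signal to a rational
# agent under the Brier score. All numbers are invented for illustration.

prior = 0.3            # P(theta = 1) under the experiment's data-generating process
p_sig_given_1 = 0.8    # P(signal = 1 | theta = 1): signal accuracy
p_sig_given_0 = 0.2    # P(signal = 1 | theta = 0): false-positive rate

# Agent without the visualization: best it can do is report the prior.
# Expected Brier loss = E[(prior - theta)^2].
baseline_loss = (1 - prior) * prior**2 + prior * (1 - prior)**2

# Rational agent with the visualization: reports the posterior given the signal.
p_s1 = p_sig_given_1 * prior + p_sig_given_0 * (1 - prior)
post_s1 = p_sig_given_1 * prior / p_s1
post_s0 = (1 - p_sig_given_1) * prior / (1 - p_s1)

# When the report equals the true posterior q, E[(q - theta)^2 | signal] = q(1 - q).
informed_loss = p_s1 * post_s1 * (1 - post_s1) + (1 - p_s1) * post_s0 * (1 - post_s0)

# A near-zero gap would mean the visualization barely matters for the
# decision problem under this scoring rule.
value_of_signal = baseline_loss - informed_loss
print(f"baseline loss {baseline_loss:.4f}, informed loss {informed_loss:.4f}, "
      f"value of signal {value_of_signal:.4f}")
```

The same comparison also separates sources of loss: the distance between observed participant scores and the informed-agent benchmark is loss attributable to how people read the display, not to the difficulty of the task itself.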
