This post is by Eric.
This Thursday, at 12 pm ET, Jessica Hullman is stopping by to talk to us about theories of inference for data interactions. You can register here.
Abstract
Research and development in computer science and statistics have produced increasingly sophisticated software interfaces for interactive and exploratory analysis, optimized for easy pattern finding and data exposure. But design philosophies that emphasize exploration over other phases of analysis risk confusing a need for flexibility with a conclusion that exploratory visual analysis is inherently “model free” and cannot be formalized. I will motivate how, without a grounding in theories of human statistical inference, research in exploratory visual analysis can lead to contradictory interface objectives and to representations of uncertainty that discourage users from drawing valid inferences. I will discuss how the concept of a model check in a Bayesian statistical framework unites exploratory and confirmatory analysis, and how this understanding relates to other proposed theories of graphical inference. Viewing interactive analysis as driven by model checks suggests new directions for software and empirical research on exploratory and visual analysis, as well as important questions about what class of problems visual analysis is suited to answer.
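To make the model-check idea concrete, here is a minimal sketch of a posterior predictive check in Python. Everything in it is illustrative, not from the talk: we assume a conjugate Normal-Normal model with known variance, made-up data, and an arbitrary test statistic (the sample maximum). The point is only the pattern: draw replicated datasets under the fitted model and ask whether the observed data look surprising relative to them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed data: assume y ~ Normal(mu, 1) with a
# Normal(0, 100) prior on mu -- conjugate, so the posterior is closed form.
y = rng.normal(loc=1.0, scale=1.0, size=50)

# Posterior for mu under the Normal-Normal model (known sigma^2 = 1).
prior_mu, prior_var, sigma2, n = 0.0, 100.0, 1.0, len(y)
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mu = post_var * (prior_mu / prior_var + y.sum() / sigma2)

# Draw replicated datasets y_rep from the posterior predictive distribution.
n_rep = 1000
mu_draws = rng.normal(post_mu, np.sqrt(post_var), size=n_rep)
y_rep = rng.normal(mu_draws[:, None], np.sqrt(sigma2), size=(n_rep, n))

# Model check: compare a test statistic T(y) to its distribution over T(y_rep).
T = np.max  # illustrative choice of test statistic
p_value = np.mean([T(rep) >= T(y) for rep in y_rep])
print(f"posterior predictive p-value for T = max: {p_value:.3f}")
```

A p-value near 0 or 1 flags a way the model fails to reproduce the data; in the talk's framing, an analyst eyeballing a plot of `y` against draws of `y_rep` is doing an informal version of this same check.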
About the speaker
Jessica Hullman is an Associate Professor of Computer Science at Northwestern University. Her research looks at how to design, evaluate, coordinate, and theorize representations for data-driven decision making. With Matt Kay, she co-directs the Midwest Uncertainty Collective, an interdisciplinary group of researchers working on topics in visualization, uncertainty communication, and human-in-the-loop data analysis. Jessica is the recipient of a Microsoft Faculty Fellowship, an NSF CAREER Award, and multiple best paper awards at top visualization and human-computer interaction conferences.
The video is available here.
The relevant paper is here.
Renaming my rock band Midwest Uncertainty Collective.
Wins thread. (Granted, it’s a short thread.)
Nice overview of exploratory data analysis and graphical inference.
I was taking this “replicated data under a model” as the new replacement for “fake data,” but then noticed “fake-data simulations” was used further down.
Perhaps some more discussion of choosing between prior and posterior predictive checks?
Looking forward to the talk.
Oof was this recorded? I forgot about it :(