Talks Feb 11 (Princeton) and Feb 18 (Stanford) on benchmarking human decisions from predictions

Statistical Modeling, Causal Inference, and Social Science 2025-02-10

This is Jessica. I’m giving talks this Tuesday and next Tuesday on decision-theoretic approaches for combining human domain knowledge with statistical models. I’ll discuss various joint projects with Ziyang Guo, Yifan Wu, and Jason Hartline. Come by if you’re around the Princeton or Stanford campuses!

Benchmarking decisions from visualizations and predictions
Tues Feb 11, 12pm, Seminar in Advanced Research Methods, Princeton Department of Psychology

How well does a particular information display support decision-making? This question comes up when studying human behavior under different strategies for presenting information (e.g., forecast displays, data visualizations, displays or explanations of model predictions), and in our own research when we must decide how to plot our results or report effects. Understanding how helpful a visualization or other presentation is for judgment and decision-making is difficult because observed performance in an experiment is confounded with aspects of the study design, such as how useful the provided information is for the task. Typical approaches to designing such studies make it difficult to assess how well participants did relative to the best attainable performance on the task, and to diagnose sources of error in the results. I will discuss how decision-theoretic frameworks that conceive of the performance of a Bayesian rational agent can transform how we design and evaluate visualizations and other decision-support interfaces, such as explanations of model predictions.
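To make the rational-agent framing concrete, here is a toy sketch with made-up numbers (my own illustration of the general idea, not anything from the talk): a binary state, a noisy signal, and three reference points for a study result: a prior-only floor, a Bayesian-agent ceiling, and observed human accuracy normalized between them.

```python
def bayes_agent_accuracy(prior, signal_acc):
    """Expected accuracy of a Bayesian agent that sees a binary signal
    matching the true state with probability signal_acc, updates its
    prior P(state=1)=prior, and guesses the posterior mode."""
    acc = 0.0
    for sig in (0, 1):
        # Joint probabilities P(state=1, signal=sig) and P(state=0, signal=sig)
        j1 = prior * (signal_acc if sig == 1 else 1 - signal_acc)
        j0 = (1 - prior) * ((1 - signal_acc) if sig == 1 else signal_acc)
        acc += max(j1, j0)  # guessing the posterior mode wins with prob max(j1, j0)
    return acc

# Hypothetical study: uniform prior, 70%-accurate signal, participants score 62%.
floor = max(0.5, 1 - 0.5)                 # best attainable with no signal: 0.5
ceiling = bayes_agent_accuracy(0.5, 0.7)  # best attainable with the signal: 0.7
human = 0.62
skill = (human - floor) / (ceiling - floor)
print(f"floor={floor:.2f}, ceiling={ceiling:.2f}, normalized skill={skill:.2f}")
```

The point of the floor and ceiling is that a raw accuracy like 62% is uninterpretable on its own: how impressive it is depends on how informative the stimuli were, which the rational-agent benchmark makes explicit.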

The value of information in model-assisted decision-making
Tues Feb 18, 4:30pm, Statistics Seminar, Stanford Department of Statistics

The widespread adoption of AI and machine learning models in society has brought increased attention to how model predictions impact decision processes in a variety of domains. I will describe tools that apply statistical decision theory and information economics to address pressing questions at the human-AI interface. These include: how to evaluate when a decision-maker appropriately relies on model predictions, when a human or AI agent could better exploit available contextual information, and how to evaluate (and design) prediction explanations. I will also discuss some cases where statistical theory falls short of providing insight into how people may use predictions for decisions.