
Formalizing questions about feedback loops from model predictions

This is Jessica. Recently I asked a question about when a model developer should try to estimate the relationship between model predictions and the observed behavior that results when people have access to the model predictions. Kenneth Tay suggested a recent machine learning paper on Performative Prediction by Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dunner, and Moritz Hardt. It comes close to answering the question and raises some additional ones.

My question had been about when it’s worthwhile, in terms of achieving better model performance, for the model developer to estimate and adjust for the function that maps from the predictions you visualize to the realized behavior. This paper doesn’t attempt to address when it’s worthwhile, but assumes that these situations arise and formalizes the concepts you need to figure out how to deal with them efficiently.

It’s a theoretical paper, but they give a few motivating examples where reactions to model predictions change the target of the prediction: crime prediction changes police allocation, which changes crime patterns; stock price prediction changes trading activity, which changes stock prices; etc. In ML terms, whenever reactions to predictions interfere with the natural data generating process, you get distribution shift: the distribution you used to develop the model differs from the one that results after you deploy the model. They call this “performativity.” So what can be said/done about it?

First, assume there’s a map D(.) from model parameters to the joint distributions over features (X) and outcomes (Y) they induce, e.g., for any specific parameter choice theta, D(theta) is the specific joint distribution over X and Y that you get as a result of deploying a model with parameters theta. The problem is that the model is calibrated given the data that was seen prior to deploying it, not the data that results after it’s deployed.
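In symbols (my notation, loosely following the paper), the map and the data it induces look something like:

$$ D : \Theta \rightarrow \Delta(\mathcal{X} \times \mathcal{Y}), \qquad Z = (X, Y) \sim D(\theta) \ \text{once the model with parameters } \theta \text{ is deployed.} $$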

Typically in ML the way to deal with this is to retrain the model. However, maybe you don’t always have to do this. The key is to find the decision rule (here defined by the model parameters theta) that you know will perform well on the distribution D(theta) that you’re going to observe when you deploy the model. The paper uses a risk minimization framework to talk about two properties you want in order to find this rule.
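To make the setup concrete, here’s a toy sketch (mine, not the paper’s simulation) where the “model” is a single predicted value theta, deploying it shifts the outcome distribution, and we naively retrain on whatever data the last deployment induced:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_D(theta, n=10_000, epsilon=0.5):
    """Toy performative map D(theta): the baseline outcome mean is 1.0,
    but deploying the prediction theta shifts it by epsilon * theta."""
    return rng.normal(loc=1.0 + epsilon * theta, scale=1.0, size=n)

# Naive retraining: refit theta by minimizing squared loss on the data
# induced by the previously deployed theta (for squared loss the risk
# minimizer is just the sample mean).
theta = 0.0
for t in range(15):
    y = sample_from_D(theta)
    theta = y.mean()
    print(t, round(float(theta), 3))

# With epsilon = 0.5 the loop settles near theta = 2, the point where the
# deployed prediction agrees with the distribution it induces. Crank
# epsilon up to 1 or more and the same loop runs away instead of settling.
```

The epsilon knob here is a stand-in for how strongly the world reacts to the deployed model, which is exactly the quantity the conditions discussed further down put a limit on.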

First you have to define the objective of finding the model specification (parameters theta) that minimizes loss over the induced distribution rather than the fixed distribution you typically assume in supervised learning. They call this “performative optimality.”
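In the paper’s terms (and roughly my notation), that objective is the performative risk, with the expectation taken over the distribution the model itself induces:

$$ \mathrm{PR}(\theta) = \mathbb{E}_{Z \sim D(\theta)}\big[\ell(Z;\theta)\big], \qquad \theta_{\mathrm{PO}} \in \arg\min_{\theta} \mathrm{PR}(\theta). $$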

Next, you need “performative stability,” which is defined in the context of repeated risk minimization. Imagine a process defined by some update rule where you repeatedly find the model that minimizes risk on the distribution you observed when you deployed the previous version of the model, D(theta_{t-1}). You’re looking for a fixed point in this risk minimization process (what I called visualization equilibrium).
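Written out (again, roughly the paper’s definitions), each round of retraining minimizes risk on the last deployed model’s induced distribution, and a performatively stable point is a fixed point of that update:

$$ \theta_{t+1} = \arg\min_{\theta} \ \mathbb{E}_{Z \sim D(\theta_t)}\big[\ell(Z;\theta)\big], \qquad \theta_{\mathrm{PS}} = \arg\min_{\theta} \ \mathbb{E}_{Z \sim D(\theta_{\mathrm{PS}})}\big[\ell(Z;\theta)\big]. $$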

I like this formulation, and its implications for thinking about when this kind of thing is achievable. This gets closer to the question I was asking. The authors show that to guarantee that it’s actually feasible to find the performative optima and that performatively stable points exist, you need both your loss function and the map D(.) to have certain properties.

First, the loss needs to be smooth and strongly convex to guarantee a linear convergence rate in retraining to a stable point that approximately minimizes your performative risk. However, you also need the map D(.) to be sufficiently Lipschitz continuous, which constrains the relationship between the distance in parameter space between different thetas and the distance in response distribution space between the different distributions those alternative thetas induce. Stated roughly, your response distribution can’t be too sensitive to changes in the model parameters. If you can get a big change in the response distribution from a small change in model parameters, you might not be able to find your performatively stable solution.
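As I read it, the condition on D(.) is stated as epsilon-sensitivity in Wasserstein (earth mover’s) distance, and the guarantee that repeated retraining converges at a linear rate to a stable point requires the sensitivity to be small relative to the loss’s curvature (something like epsilon < gamma/beta for a gamma-strongly convex, beta-smooth loss):

$$ W_1\big(D(\theta), D(\theta')\big) \le \varepsilon \,\lVert \theta - \theta' \rVert_2 \quad \text{for all } \theta, \theta'. $$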

This is where things get interesting, because now we can tie things back to real-world situations and ask: when is this guaranteed? I have some hunches based on my reading of recent work in AI-human collaboration that maybe this doesn’t always hold. For example, some work has discussed how in situations where you have a person overseeing how model predictions are applied, you have to be careful about assuming that it’s always good to update your model because it improves accuracy. Instead, a more accurate model may lead to worse human/model “team” decision making if the newly updated model’s predictions conflict in some noticeable way with the human’s expectations about it. You may instead want to aim for updates that don’t make the predictions so different from the previously deployed model’s predictions that the human stops trusting the model altogether and makes all the decisions themselves, because then you’re stuck with human accuracy on a larger proportion of decisions. So this implies that it may be possible for a small change in parameter space to result in a disproportionately large change in response distribution space.

There’s lots more in the paper, including some analysis showing that it can be harder in general to achieve performative optimality than to find a performatively stable model. Again, it’s theoretical, so it’s more about reflecting on what’s possible with different retraining procedures, though they run some simulations involving a specific game (strategic classification) to demonstrate how the concepts can be applied. It seems there’s been some follow-up work that generalizes to a setting where the distribution you get from some set of model parameters (a result of strategic behavior) isn’t deterministic but depends on the previous state. This setting makes it easier to think about response distribution shifts caused by “broken” mental models, for example. At any rate, I’m excited to see that ML researchers are formalizing these questions, so that we have more clues about what to look for in data to better understand and address these issues.
