Explainable ML versus Interpretable ML

This post is by Keith O’Rourke and, as with all posts and comments on this blog, is just a deliberation on dealing with uncertainties in scientific inquiry and should not be attributed to any entity other than the author. As with any critically-thinking inquirer, the views behind these deliberations are always subject to rethinking and revision at any time.

First, I want to share something I was taught in MBA school – all new (and old but still promoted) technologies exaggerate their benefits, are overly dismissive of difficulties, underestimate the true costs, and fail to anticipate how older (less promoted) technologies can adapt to offer similar or even better benefits, with fewer difficulties and/or lower costs.

Now I have recently become aware of work by Cynthia Rudin (Duke) that argues upgraded versions of easy-to-interpret machine learning (ML) technologies (e.g. CART) can offer predictive performance similar to newer ML (e.g. deep neural nets), with the added benefit of interpretability. But I am also trying to keep in mind, or even anticipate, how newer ML (e.g. deep neural nets) can adapt to (re-)match this.

Never say never.

The abstract from Learning Customized and Optimized Lists of Rules with Mathematical Programming by Cynthia Rudin and Seyda Ertekin may suffice to provide a good enough sense for this post.

We introduce a mathematical programming approach to building rule lists, which are a type of interpretable, nonlinear, and logical machine learning classifier involving IF-THEN rules. Unlike traditional decision tree algorithms like CART and C5.0, this method does not use greedy splitting and pruning. Instead, it aims to fully optimize a combination of accuracy and sparsity, obeying user-defined constraints. This method is useful for producing non-black-box predictive models, and has the benefit of a clear user-defined tradeoff between training accuracy and sparsity. The flexible framework of mathematical programming allows users to create customized models with a provable guarantee of optimality. 

For those with less background in ML, think of regression trees or decision trees (CART) on numerical steroids.
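For concreteness, here is a minimal sketch of what a rule list is: an ordered set of IF-THEN rules checked from top to bottom, with the first rule that fires determining the prediction. The features, thresholds and labels below are made up for illustration and are not from Rudin’s paper.

    # A minimal rule list classifier (hypothetical rules, invented for illustration).
    # Each rule is (condition, predicted_label); rules are checked in order and
    # the first condition that holds determines the prediction.
    rule_list = [
        (lambda x: x["age"] < 25 and x["prior_defaults"] > 0, "deny"),
        (lambda x: x["income"] > 80_000,                      "approve"),
        (lambda x: x["debt_to_income"] > 0.45,                "deny"),
    ]
    default_label = "approve"  # fallback when no rule fires

    def predict(x):
        for condition, label in rule_list:
            if condition(x):
                return label
        return default_label

    print(predict({"age": 22, "prior_defaults": 1, "income": 30_000,
                   "debt_to_income": 0.20}))   # -> "deny" (first rule fires)

What the mathematical programming buys you is choosing which rules go in the list, and in what order, to optimize a combination of accuracy and sparsity rather than growing the rules greedily as CART does.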

For those with more background in predictive modelling, this may be the quickest way to get a sense of what is at stake (and the challenges). Start at 17:00 and it’s done by 28:00 – so about 10 minutes.

My 9-line summary notes of Rudin’s talk (link above): Please stop doing “Explainable” ML [for high-stakes decisions].

Explainable ML – using a black box and explaining it afterwards.
Interpretable ML – using a model that is not black box.

Advantages of interpretable ML are mainly for high-stakes decisions.

The accuracy/interpretability tradeoff is a myth – in particular, for problems with good data representations, all ML methods perform about the same.

[This does leave many application areas where it is not a myth, and explainable or even unexplainable ML will have accuracy advantages.]

Explainable ML is flawed: there are two models, the black box model and an understudy model that is explainable and predicts similarly but not identically (it matches the black box exactly only x% of the time). And sometimes the explanations do not make sense.
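To make the two-models point concrete, here is a rough sketch of the usual setup: fit a black box, fit an interpretable understudy to the black box’s predictions, then compute the agreement rate (the x%). The models and data are stand-ins chosen for illustration (a random forest for the black box, a shallow tree for the understudy, scikit-learn throughout), not anything from Rudin’s work.

    # Black box plus explainable understudy, and the agreement rate between them.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The understudy is trained on the black box's predictions, not on y.
    understudy = DecisionTreeClassifier(max_depth=3, random_state=0)
    understudy.fit(X, black_box.predict(X))

    agreement = (understudy.predict(X) == black_box.predict(X)).mean()
    print(f"understudy matches the black box on {agreement:.1%} of cases")

Whatever the agreement rate turns out to be, any explanation read off the understudy describes the understudy, not the black box, and the cases where the two disagree are exactly where it can mislead.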

p.s. Added a bit more about the other side: a problematic obsession with transparency, and arguments for why “arguments by authority” [black boxes], although the worst kind of arguments, are all that most people will accept and make use of – here.

p.s2. Just picked up some nice explanation about explanation from Stephen Wolfram’s post. At the very end there is an insightful paragraph from which I’ll quote a couple of sentences: “If we choose to interact only with systems that are computationally much simpler than our brains, then, yes, we can expect to use our brains to systematically understand what the systems are doing … But if we actually want to make full use of the computational capabilities that our universe makes possible, … —we’ll never be able to systematically “outthink” or “understand” those systems … But at some level of abstraction we know enough to be able to see how to get purposes we care about achieved with them.”

There were two issues that Rudin identified in her linked talk that might be worth thinking about carefully:

1. Given a choice between an interpretable model and a black box model, many end users actually prefer the black box.

2. Applied statisticians have always cared about interpretability but ML has not? “Statisticians can fix this. It won’t happen in ML.”

Now some reasons for 1. are discussed here: To better enable others to avoid being misled when trying to learn from observations, I promise not to be transparent, open, sincere nor honest?

As for ML not caring about interpretability, there was a lot of work done in AI in the 1980s trying to discern whether human learning was interpretable or just explainable – Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA, US: The MIT Press.

Recalling the never say never, some might be interested in this: Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions.

 

12 thoughts on “Explainable ML versus Interpretable ML”

  1. Keith:

    I don’t think that the fits from Cart or similar methods are “easy to interpret.” Or, to put it more carefully, the fits may well be easy to interpret wrongly, but I don’t think they’re easy to understand; I think they’re misleading in that they can look interpretable even though they’re not. I’m not saying these methods are useless for prediction; I’m just saying that I think it’s generally a mistake to take the fit obtained from such a procedure and start interpreting it.

    Indeed, one could say that this is a problem with so much of statistics, that people take a fit and then tell stories from it. For example, lots of people want to say that predictor X does or does not matter. See the discussion starting on page 960 of this article; there I’m talking about causality but the same issues arise with prediction.

    • Andrew:

      The interpretability was not meant to refer to grasping why data generated/collected this way enables prediction and in particular why the individual predictions are most appropriate. That would be like “learning causal structure itself” as you put it on page 960.

      So the point is not to take “the fit obtained from such a procedure and start interpreting it” but rather to be able to easily discern, from say a given rule set, why the individual ended up with exactly that fit (and how they would have ended up somewhere different with different inputs).

      That is, interpreting the map from inputs to fit rather than interpreting the fit as a map to “reality”. Interpreting the map from inputs to fit for, say, deep neural nets is considered by some (e.g. Geoff Hinton) to be essentially hopeless, so they fit CART-like models to mimic the predictions of a deep neural net to get some explanation of the map from inputs to fit.

      If we follow Leonard Cohen’s advice (1.) and clearly distinguish the representation (predictive model) from what it is taken to represent (empirical expectations), then in principle the representation as a model on its own can be fully understood (it’s just math), and some are simple enough that they can be understood in practice.

      Perhaps the main advantage of predictive models that can be understood in practice is that both the modeler and others can repeatedly discern and manipulate them to more fully grasp (criticize) them and especially make changes to them (creative modifications).

      1. There is the word and there is the butterfly. If you confuse these two items people have the right to laugh at you. https://acephalous.typepad.com/acephalous/leonard-cohen-how-to-spea.html

      • I work in financial services, and this issue frequently pops up. If we deny credit based on a model, regulators want to know why the person was denied credit. This is trivial in a CART model and next to impossible in a DNN.
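        As a sketch of why this is trivial for a CART-style model (invented feature names and toy data here, using scikit-learn’s DecisionTreeClassifier, so just an illustration of the idea rather than a production credit model), the tests along one applicant’s decision path can be printed directly:

            # Read a denial "reason" off a fitted tree: walk the applicant's
            # decision path and print each split test along the way.
            import numpy as np
            from sklearn.tree import DecisionTreeClassifier

            feature_names = ["income", "debt_to_income", "prior_defaults"]
            X = np.array([[60_000, 0.30, 0], [25_000, 0.55, 2], [90_000, 0.10, 0],
                          [30_000, 0.50, 1], [45_000, 0.40, 0], [20_000, 0.60, 3]])
            y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approve, 0 = deny

            tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

            applicant = np.array([[22_000, 0.58, 1]])
            for node in tree.decision_path(applicant).indices:  # root to leaf
                feat = tree.tree_.feature[node]
                if feat >= 0:  # internal node; leaf nodes are marked -2
                    threshold = tree.tree_.threshold[node]
                    op = "<=" if applicant[0, feat] <= threshold else ">"
                    print(f"{feature_names[feat]} {op} {threshold:.2f}")
            print("decision:", "approve" if tree.predict(applicant)[0] == 1 else "deny")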

  2. Rudin’s “list of rules” model is a classifier in Frank Harrell’s sense (as opposed to a probabilistic prediction, e.g. logistic regression) and so doesn’t easily lend itself to a Bayesian interpretation; and yet she writes that the paper is “motivated by Bayesian approaches (Rudin et al, 2011, 2013; Letham et al, 2015; Yang et al, 2017; Wang and Rudin, 2015), with earlier closely related ideas in the PhD thesis of Allison Chang (Chang, 2012)”.

    At some point soon I need to track down these references; like Justin, I work in a field where interpretability in Rudin’s sense is highly prized.

    • > motivated by Bayesian approaches
      Her approach (from some email exchanges) seems to be: try hard to do full Bayes, but when algorithms built in those attempts are realised to have good optimisation properties on their own, switch to those for now and try to do full Bayes later. A recent example is https://arxiv.org/abs/1809.03541

    • It’s Bayesian in the sense that they define a prior distribution for the parameters of the decision list itself, namely parameters like the number of rules and the number of conditions in a rule. Then they draw from the posterior distribution of decision lists and return the best.
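      As a cartoon of that idea (not the actual algorithm or priors in those papers, just an illustration of “prior over rule-list complexity, then score candidates”): put Poisson-style priors on the number of rules and the number of conditions per rule, score each candidate list by log prior plus log likelihood, and keep the best one seen.

          # Cartoon of a prior over rule lists plus posterior-style scoring.
          import math, random

          def predict(rule_list, x, default=0):
              # Apply the first rule whose conditions all hold; otherwise the default.
              for conditions, label in rule_list:
                  if all(x[f] > t for f, t in conditions):
                      return label
              return default

          def log_prior(rule_list, lam_rules=3.0, lam_conds=2.0):
              # Poisson-style penalties on the number of rules and conditions per rule.
              k = len(rule_list)
              lp = k * math.log(lam_rules) - lam_rules - math.lgamma(k + 1)
              for conditions, _label in rule_list:
                  c = len(conditions)
                  lp += c * math.log(lam_conds) - lam_conds - math.lgamma(c + 1)
              return lp

          def log_likelihood(rule_list, data, eps=0.05):
              # Each observation is classified correctly with probability 1 - eps.
              return sum(math.log(1 - eps) if predict(rule_list, x) == y else math.log(eps)
                         for x, y in data)

          def random_rule_list(n_features, rng):
              # A candidate: a few rules, each a conjunction of "feature f > threshold t".
              return [([(rng.randrange(n_features), rng.random())
                        for _ in range(rng.randint(1, 3))], rng.randint(0, 1))
                      for _ in range(rng.randint(1, 4))]

          rng = random.Random(0)
          data = [([rng.random() for _ in range(3)], rng.randint(0, 1)) for _ in range(200)]
          candidates = [random_rule_list(3, rng) for _ in range(500)]
          best = max(candidates, key=lambda r: log_prior(r) + log_likelihood(r, data))
          print("best rule list:", best)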

  3. One nice example is the “Scalable and accurate deep learning for electronic health records” paper (https://dx.doi.org/10.1038/s41746-018-0029-1 or https://arxiv.org/abs/1801.07860), where if you look into the appendix you see that a pretty simple logistic regression with a relatively simple feature space performed almost as well as a super-complex deep learning model. You would guess that the features for the LR could be optimized some more, and then it would probably be on more or less the same level.
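    As a sketch of that general pattern (synthetic data, with boosted trees standing in for the deep model, so none of the EHR study’s actual features or results), comparing a simple logistic regression baseline against a more complex model looks roughly like this:

        # Simple baseline vs. a more complex model on the same held-out data.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = make_classification(n_samples=5000, n_features=30, n_informative=8,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        simple = make_pipeline(StandardScaler(),
                               LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
        complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

        for name, model in [("logistic regression", simple),
                            ("boosted trees", complex_model)]:
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")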

  4. Thanks for the interesting discussion and links.

    I watched the video, then went to Rudin’s web site, found a lot of cool papers, read a few of them, downloaded their Matlab SLIM optimizer from GitHub (https://github.com/ustunb/slim-matlab), and did a small study on clinical data from my lab … all before lunch :)

    Hooray for this blog (and the internet – but not the bad, democracy ruining part). That’s really all I wanted to say – just a shout out/thanks to Dr. Gelman for providing this resource.

    ps: this is from someone who has struggled for decades to get physicians to “buy in” to most AI/ML approaches like random forest, CNN, etc… The push for interpretable AI is a really interesting recent development.

    • My struggle is to advise folks not to accept arguments for the advantages of using neural nets on data sets with about 100 observations, or deep neural nets with about 3000 observations. Fortunately, in the one with 3000, it was noticed that nearest neighbours had similar MSE with more uniform error rates.

      NNs and Deep NNs are currently all the rage and the expected uncritical hyped adoption is happening.

    • On the other hand, there is this other side: “a problematic/distracting obsession with transparency”.

      For instance in medicine “it’s not always possible to know why the computer made the decision it did – a prospect that initially sounds troubling when it comes to aiding physicians … Hinton uses the example of a written number to illustrate how both humans and neural networks interpret information without being able to explain how they did it” https://www.utoronto.ca/news/recipe-save-lives-geoffrey-hinton-and-david-naylor-call-physicians-embrace-ai

      That issue was pointed to at the end of my post with the comment about a preference for black boxes, my much earlier post involving the promise not to be transparent and earlier work on discerning human learning by Simon.

      Not sure of my take on it, but given that Peirce provided good arguments for why “arguments by authority”, although the worst kind of arguments, are all that most people will accept and make use of, it makes sense.

      Arguments based on good performance are mostly arguments by authority (the choice of the performance measure, and often its assessment too), and it may well be the case that most physicians will accept these but not interpretable ones – at least as is. That is, they may start to second guess the interpretable ones and either not use them or modify them arbitrarily.

      So while in science it has to be “take no one’s word for it”, in practice it might need to be “do take this group’s word for it”.

      Managing those groups well then becomes the challenge. One route would be to have that group develop an interpretable model (if its performance is adequate) that can be examined/approved and then put in a black box for general use.

  5. Keith – thanks for the link/summary etc (would also echo Michael S’s remarks about this blog being a great example of the internet actually working well).

    Re interpretability v. explicability with ML, this is now regarded (by some at least) as a major legal issue, because of the requirement under the European GDPR for plain-English explanations of decisions reached by an algorithm. For a quick intro, see e.g. https://bit.ly/2Jw0qKQ. (BTW, despite being EU in origin, it has potentially global reach, as it refers to processing affecting EU citizens, wherever they, or the processing, reside.)

    Given the huge fines that can be levied under the GDPR, the issue of being able to tell whether an interpretation/explanation of an ML decision merely sounds plausible but is in fact post hoc BS may well find its way into courts before too long.
