8.2 The Future of Interpretability
Let’s take a look at where machine learning interpretability might be going.
The focus will be on model-agnostic interpretability tools.
It is much easier to automate interpretability when it is decoupled from the underlying machine learning model. The advantage of model-agnostic interpretability lies in its modularity: we can easily replace the underlying machine learning model, and we can just as easily replace the interpretability method. For these reasons, model-agnostic methods will scale much better, which is why I believe they will become dominant in the long term. But intrinsically interpretable methods will also have their place.
Machine learning will be automated and so will interpretability.
An already visible trend is the complete automation of model fitting: automated feature engineering and selection, automated hyperparameter optimization, comparison of different models, and ensembling or stacking of the models. The result is the best possible prediction model. When we use model-agnostic interpretation methods, we can automatically apply them to any model that emerges from the automated machine learning process. In a way, we can automate this second step as well: automatically compute the feature importance, plot the partial dependence, fit a surrogate model, and so on. Nothing stops you from automatically computing all these model interpretations. Humans will still be needed for the actual interpretation.

Imagine it: you upload a dataset, specify the prediction goal and, at the push of a button, the best prediction model is fitted and the program spits out all interpretations of the model. Such solutions already exist, and I argue that these automated machine learning services will be sufficient for many applications. Today anyone can build a website without knowing HTML, CSS and JavaScript, yet there are still web developers around. Similarly, anyone will be able to train machine learning models without knowing how to program, and there will still be a need for machine learning experts.
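Here is a minimal sketch of that automated interpretation step, assuming scikit-learn in Python; the dataset, the two candidate models, and the stand-in for the automated model search are purely illustrative, not a real AutoML service:

```python
# Sketch: whichever model an automated search returns, the same model-agnostic
# interpretation calls can be applied to it without modification.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for automated machine learning: pick the candidate with the best test R^2.
candidates = [Ridge(), RandomForestRegressor(random_state=0)]
best = max(candidates, key=lambda m: m.fit(X_train, y_train).score(X_test, y_test))

# Automated interpretation: these calls do not depend on which model won.
importance = permutation_importance(best, X_test, y_test, n_repeats=10, random_state=0)
pdp = partial_dependence(best, X_test, features=[0])  # partial dependence of feature 0

# Global surrogate: a shallow decision tree approximating the black box predictions.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X_test, best.predict(X_test))

print("Permutation importances:", importance.importances_mean.round(3))
print("Partial dependence grid shape:", pdp["average"].shape)
print("Surrogate R^2 vs. black box:", round(surrogate.score(X_test, best.predict(X_test)), 3))
```

The point of the sketch is that none of the interpretation code refers to the type of the winning model, so it can be bolted onto any automated training pipeline.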
We don’t analyze data, we analyze models.
The raw data itself is always useless (I exaggerate on purpose). I don't care about the data; I care about the knowledge distilled from the data. Interpretable machine learning is a great way to distill knowledge from data: you can probe the model extensively, the model automatically recognizes if and how features are relevant to the prediction (many models have built-in feature selection), the model learns how the relationships are best represented, and, if trained correctly, the final model is the best possible approximation of reality.
Many analytical tools are already based on data models (because they are based on distributional assumptions):
- Simple hypothesis tests like Student's t-test
- Hypothesis tests with adjustments for confounders (usually GLMs)
- Analysis of variance (ANOVA)
- The correlation coefficient (for a single feature, the standardized linear regression coefficient equals Pearson's correlation coefficient; a short derivation follows this list)
- …
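To spell out the remark about the correlation coefficient, here is the derivation for the case of simple linear regression with one feature: for $y = \beta_0 + \beta_1 x + \epsilon$, the least squares estimate is $\hat{\beta}_1 = \text{Cov}(x, y) / \text{Var}(x)$, so the coefficient standardized by the feature and outcome standard deviations is

$$\hat{\beta}_1 \cdot \frac{\hat{\sigma}_x}{\hat{\sigma}_y} = \frac{\text{Cov}(x, y)}{\hat{\sigma}_x \hat{\sigma}_y} = r_{xy},$$

which is exactly Pearson's correlation coefficient.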
So what I am telling you here is actually nothing new. Why switch from analyzing assumption-based, transparent models to analyzing assumption-free black box models? Because making all these assumptions is problematic: they are usually wrong (unless you believe that most of the world follows a Gaussian distribution), difficult to check, very restrictive for the relationships the model can represent, and hard to automate. Assumption-based models typically have worse predictive performance on untouched test data than black box machine learning models, but this holds only for big datasets: with little data, interpretable models with good assumptions often perform better than black box models with many parameters. The black box machine learning approach needs a lot of data to work well. With the digitization of everything, we will have bigger and bigger datasets, and the machine learning approach therefore becomes more attractive: we do not make assumptions, we approximate reality as closely as possible (while avoiding overfitting on the training data). I argue that we should take all the tools we have in statistics to answer questions (hypothesis tests, correlation measures, interaction measures, visualization tools, confidence intervals, p-values, prediction intervals, probability distributions) and rewrite them for black box models. In a way, this is already happening:
- Take the classic linear model: the standardized regression coefficient is already a feature importance measure. With permutation feature importance, we have an importance measure that works with any model (see the sketch after this list).
- In a linear model, a coefficient measures the effect of a single feature on the predicted outcome. The generalized, model-agnostic version of this is the partial dependence plot.
- Test whether A or B is better: for this we can also use partial dependence functions. What we do not have yet (to the best of my knowledge) are statistical tests for arbitrary black box models.
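As a sketch of the first point (Python with scikit-learn on synthetic data; the models and settings are only illustrative): the standardized coefficients of a linear model and the model-agnostic permutation feature importance both rank the features, but only the latter can be applied to any model.

```python
# Sketch: intrinsic importance (standardized coefficients) vs. the
# model-agnostic analog (permutation feature importance).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)

# Linear model: standardized coefficients as a built-in importance measure.
lm = LinearRegression().fit(X, y)
std_coef = lm.coef_ * X.std(axis=0) / y.std()

# Black box model: permutation importance works here just as well.
bb = GradientBoostingRegressor(random_state=0).fit(X, y)
perm = permutation_importance(bb, X, y, n_repeats=10, random_state=0)

for j in range(X.shape[1]):
    print(f"feature {j}: |standardized coef| = {abs(std_coef[j]):.2f}, "
          f"permutation importance = {perm.importances_mean[j]:.2f}")
```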
Data scientists will automate themselves.
I believe that data scientists will eventually automate themselves out of the job for many analysis and forecasting tasks. For this to happen, the tasks must be well-defined and there must be some processes and routines around them. Today, these routines and processes are missing, but data scientists and their colleagues are working on them. As machine learning becomes an integral part of many industries and institutions, many of the tasks that are currently being figured out will be automated.
Robots and programs will explain themselves.
We need more intuitive interfaces to machines and programs that make heavy use of machine learning. Some examples: a self-driving car that reports why it stopped abruptly ("70% probability that a kid will cross the road"); a credit default program that explains to a bank employee why a credit application was rejected ("The applicant has too many credit cards and is employed in an unstable job."); a robot arm that explains why it moved an item from the conveyor belt into the trash bin ("The item has a craze at the bottom.").
Interpretability could boost machine intelligence research.
I can imagine that by doing more research on how programs and machines can explain themselves, we will improve our understanding of intelligence and become better at creating intelligent machines.
In the end, all these predictions are speculation, and we have to see what the future really brings. Form your own opinion and continue learning!