Chapter 5 Interpretable Models
The most straightforward way to achieve interpretable machine learning is to restrict yourself to algorithms that create interpretable models.
Common examples of interpretable models are linear regression, logistic regression and decision trees.
The following sections introduce these models: not in detail, only the basics, because plenty of books, videos, tutorials and papers already cover them; we focus instead on how to interpret the models. This chapter covers linear regression, logistic regression and decision trees in more detail and also lists some other interpretable models.
All of the interpretable model types explained in this book are interpretable on a modular level, with the exception of the k-nearest neighbors method. The following table gives an overview of the interpretable model types and their properties. A model is linear if the association between features and target is modeled linearly. A monotonic model ensures that the relationship between a feature and the target outcome always goes in the same direction over the whole range of the feature: an increase in the feature's value consistently leads to either an increase or a decrease of the target outcome, but never both for the same feature. Monotonicity is useful for the interpretation of a model, because it makes the relationship easier to understand. Some models can automatically include interactions between features when predicting the outcome, and you can include interactions in any kind of model by manually creating interaction features (see the sketch below). Interactions can be important for correctly predicting the outcome, but too many or too complex interactions hurt interpretability. Finally, some models handle only regression, some only classification, and some both.
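To make the point about manually created interaction features concrete, here is a minimal Python sketch (the data is synthetic and all variable names are my own, not from this book): a purely linear model cannot capture the product of two features on its own, but it can once that product is appended as an extra column.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: the target depends on x1, x2 AND their product x1*x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1] + 4.0 * X[:, 0] * X[:, 1]

# Manually create the interaction feature and append it as a third column.
interaction = (X[:, 0] * X[:, 1]).reshape(-1, 1)
X_int = np.hstack([X, interaction])

# A plain linear model can now represent the interaction explicitly.
model = LinearRegression().fit(X_int, y)
print(model.intercept_, model.coef_)  # ~1.0 and ~[2.0, 3.0, 4.0]
```

The same effect can be had with `sklearn.preprocessing.PolynomialFeatures(interaction_only=True)`; either way, the extra column is just another feature whose weight you can interpret.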
You can use this table to choose a suitable interpretable model for your task, whether it is regression (Regr.) or classification (Class.):
Algorithm | Linear | Monotone | Interaction | Task |
---|---|---|---|---|
Linear regression | Yes | Yes | No | Regr. |
Logistic regression | No | Yes | No | Class. |
Decision trees | No | Some | Yes | Class. + Regr. |
RuleFit | Yes | No | Yes | Class. + Regr. |
Naive Bayes | No | Yes | No | Class. |
k-nearest neighbors | No | No | No | Class. + Regr. |
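Since the table recommends linear regression for monotone, interaction-free regression tasks, here is a minimal scikit-learn sketch of what its modular interpretability looks like in practice (the dataset and the print formatting are illustrative choices of mine, not the book's):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

# Fit a plain linear regression on a small built-in regression dataset.
data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

# The learned weights carry the interpretation: each coefficient is the
# expected change in the target per one-unit increase of that feature,
# holding all other features fixed.
for name, beta in zip(data.feature_names, model.coef_):
    print(f"{name:>4}: {beta:+8.1f}")
```

Each weight can be read on its own, which is what "interpretable on a modular level" refers to for this model class.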
Terminology
- Y is the target outcome.
- X are the features (also called variables, covariables, covariates, or inputs).
- \(\beta\) are regression weights (also called coefficients).
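With this notation in place, the linear regression model from the table above can already be written as a weighted sum of the features (it is discussed in detail later in this chapter), where \(\epsilon\) denotes the error term:

\[
y = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p + \epsilon
\]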