Explanation Techniques for a Random Forest that predicts stroke

Introduction

To say that AI/ML is going to change the future understates how widely the technology is already deployed. Machine learning models are used today in many different and critical domains.

ML/AI engineers talk about models making predictions with associated probabilities. What we are beginning to see, though, is that the technology is transitioning from making predictions to making decisions without a human in the loop. These are decisions that can severely and negatively impact individuals. Use cases include:

  1. Automatic assessment of credit worthiness
  2. Automation in the justice system
  3. Automation of HR hiring processes
  4. Digital service platforms (such as food delivery services) that route assignments to their workers in an automated fashion

In such examples, transparency in the algorithmic decision-making process becomes essential. Being negatively affected by an automatic decision immediately elicits questions: Is it a fair decision, made for the right reasons? What do I need to change to get a decision in my favour? It is a very human reflex to ask: "why?"

Transparency has an additional role: without understanding why a prediction is made, it becomes difficult to assess whether a decision is fair or robust, and it is hard to hold someone accountable for it (Doshi-Velez & Kim, 2017). It is a key ingredient in creating ethical AI.

Transparency and interpretability enable many of the properties we expect from trustworthy AI.

It is worth noting that a legal argument can be made that the GDPR contains a “right to explanation” that can be invoked in the case of high-impact automatic decision making (Goodman & Flaxman, 2017). While this is debatable, recently proposed EU legislation (the AI Act and the Digital Services Act) makes it very clear that algorithmic transparency in high-risk scenarios will become a legal necessity.

In this article, I explore explanation techniques for tabular data. This is the typical kind of data used in automatic decision-making processes such as the examples above.

The experiment

To gain first-hand experience with tabular explanation techniques, I first needed a representative predictive model. I used a dataset for predicting stroke (Fedesoriano, 2021). It contains 11 features such as gender, age, pre-existing medical conditions, and social factors such as living environment or work type. The dataset contains 5110 observations.

The next step was some basic pre-processing of the data: standardising features and handling missing values. Once this was completed, I created a model with Scikit-Learn. In particular, I constructed a Random Forest, which is a suitable and typical choice for tabular data. The accuracy of the model is 94%, but the recall is a poor 14%. A random baseline would reach a recall of roughly 5%, so the model is at least better than random. I will not dive into the reasons for this, but simply use this model to try out explanation techniques.
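For reference, here is a minimal sketch of such a pipeline. The column names follow the Kaggle stroke dataset, and the imputation strategy, train/test split and hyperparameters are illustrative assumptions rather than my exact setup.

```python
# Minimal sketch of the pre-processing and model-training step. Column names
# follow the Kaggle stroke dataset; the imputation strategy, split and
# hyperparameters are illustrative assumptions, not the exact setup used.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("healthcare-dataset-stroke-data.csv")  # file name as on Kaggle
X, y = df.drop(columns=["id", "stroke"]), df["stroke"]

numeric = ["age", "avg_glucose_level", "bmi"]
categorical = [c for c in X.columns if c not in numeric]

preprocess = ColumnTransformer([
    # "bmi" contains missing values: impute with the median, then standardise
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess),
                  ("rf", RandomForestClassifier(n_estimators=200, random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy only; recall needs to be checked separately
```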

Explanation techniques

To make it easy to compare different techniques, I created explanations for the same two individuals in the dataset. The first is a young lad who did not have a stroke. The second is a senior citizen who did experience a stroke. The Random Forest's prediction for both individuals matched the outcome; in jargon: the predicted target matched the true label. Let’s now find out why the Random Forest made these predictions, using several explanation techniques.

Feature importance

Feature importance maps aim to give insight into which features (i.e. variables in the dataset) were most important when a particular decision was made. Different libraries and algorithms are available to compute these maps. I used InterpretML, which is created and maintained by Microsoft Research, and in particular its SHAP explainer.
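As a rough illustration, the sketch below computes per-instance SHAP values with the shap package directly (InterpretML wraps the same SHAP algorithm); the variable names carry over from the earlier sketch and are assumptions.

```python
# Minimal sketch: SHAP values for two individuals, using the shap package
# directly on the Random Forest from the earlier sketch (InterpretML wraps
# the same SHAP algorithm). Variable names carry over from that sketch.
import scipy.sparse as sp
import shap

prep, rf = model.named_steps["prep"], model.named_steps["rf"]
X_enc = prep.transform(X_test.iloc[:2])                      # encode the two individuals
X_enc = X_enc.toarray() if sp.issparse(X_enc) else X_enc

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_enc)

# one set of SHAP values per class (exact shape differs between shap versions);
# larger absolute values mean the feature pushed that prediction more strongly
print(shap_values)
```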

Feature importance map for our two individuals.

What this graph tells us is that for individual 1, the negative stroke prediction is mostly due to their young age. For individual 2, we see that both age and hypertension (a pre-existing medical condition) are strong influencers of the positive stroke prediction.

Anchors

Anchors, developed by Ribeiro et al. (2018), aim to find the boundaries within which a prediction stays consistent. They thus provide insight into how typical a certain individual and their prediction are. I used the Alibi Explain library to compute them.
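A minimal sketch of how such an anchor can be computed with Alibi Explain is shown below, again reusing the earlier pipeline; the precision threshold and other parameters are illustrative assumptions.

```python
# Minimal sketch of computing an anchor with Alibi Explain, reusing the
# pipeline from the earlier sketch. The anchor is built in the encoded
# feature space, so the rules refer to one-hot encoded feature names.
import scipy.sparse as sp
from alibi.explainers import AnchorTabular

prep, rf = model.named_steps["prep"], model.named_steps["rf"]
to_dense = lambda a: a.toarray() if sp.issparse(a) else a

X_train_enc = to_dense(prep.transform(X_train))
X_test_enc = to_dense(prep.transform(X_test))

explainer = AnchorTabular(rf.predict, feature_names=list(prep.get_feature_names_out()))
explainer.fit(X_train_enc)  # learns feature quantiles used to build candidate rules

explanation = explainer.explain(X_test_enc[0], threshold=0.95)
print(explanation.anchor)     # the rule, e.g. ['age <= 42.00', ...]
print(explanation.precision)  # fraction of matching instances with the same prediction
print(explanation.coverage)   # fraction of the data the rule applies to
```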

Anchors for our two individuals.

The image above is a visualisation of what Anchors provides. It states that individual 1 falls into a bucket of people who are under 42 years old and never married, for whom the model consistently predicts no stroke. This rule applies to 16% of the dataset and is easy to remember. For individual 2, the rule is not so simple. In fact, it tells us that a large combination of specific factors must occur before the model predicts stroke. This makes sense, as the dataset is highly unbalanced, with only a few individuals who had a stroke.

It is of course a bit humorous that marital status is such an important factor, but that is indeed the risk of placing such a feature in the dataset: the model might end up using it. Assuming that marital status is some sort of proxy variable, this example illustrates the power of the technique: it made it transparent that the model is using marital status. This speaks against deploying the model and calls for further investigation into the dataset.

Counterfactuals

Imagine a scenario where your creditworthiness is negatively assessed by an algorithmic decision-making process. You want to i) know the reasons for this, ii) be able to contest the decision if you feel it is incorrect, and iii) know what you need to change in order to get a positive assessment. This is what counterfactuals can do. It is my impression that counterfactuals receive strong emphasis in research because they are the only technique available that delivers an actionable outcome for the user.

I used the Diverse Counterfactual Explanations (DiCE) library to create counterfactuals.
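Below is a minimal sketch of how DiCE can be invoked, reusing the dataframe and pipeline from the earlier sketch. The number of counterfactuals and the features_to_vary restriction are illustrative assumptions; my results above were produced without restricting which features may vary.

```python
# Minimal sketch of generating counterfactuals with DiCE, reusing the
# dataframe and pipeline from the earlier sketch. total_CFs and
# features_to_vary are illustrative assumptions.
import dice_ml

# DiCE needs the raw features plus the outcome; drop rows with missing bmi for simplicity
d = dice_ml.Data(dataframe=df.drop(columns=["id"]).dropna(),
                 continuous_features=["age", "avg_glucose_level", "bmi"],
                 outcome_name="stroke")
m = dice_ml.Model(model=model, backend="sklearn")  # the full pipeline, including pre-processing
exp = dice_ml.Dice(d, m, method="random")

query = X_test.dropna().iloc[[1]]  # a query instance, standing in for our second individual
cf = exp.generate_counterfactuals(
    query, total_CFs=3, desired_class="opposite",
    # restrict the search to features a person could actually change
    features_to_vary=["bmi", "avg_glucose_level", "smoking_status"])
cf.visualize_as_dataframe(show_only_changes=True)
```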

Counterfactuals for our two individuals.

The image is a visualisation of what the library outputs. Let’s focus our attention on the second individual, who is 80 years of age and is predicted to have a stroke. The counterfactual states that the simplest way to change the risk of stroke is to become 57 years of age. This is a bit of a weird statement, since you cannot change your age. Which variables are mutable is something you can and should configure, which I did not do here.

For individual 1, a whole set of variables needs to change drastically to become at risk of a stroke. Interestingly, the counterfactual almost describes individual 2, who indeed did suffer a stroke.

This example also highlights a conceptual problem with the method: not all features are easily actionable, even if they are mutable. For instance, if the counterfactual suggests changing your residence type, that is for some people much harder than, say, quitting smoking. And should the person follow the advice, the change might cause additional stress, which is itself an important risk factor. In real life, features are not isolated but intertwined, something the counterfactual does not consider. Overall, though, I do believe the technique delivers useful insight, especially in interactive settings where users can explore different outcomes and discover insightful courses of action.

Explainable boosting machines

The previously described methods are what is referred to as “post-hoc”: they assume an existing model and add explanations by generating predictions and analysing the results. This creates a problem in itself, as the method might not be truthful to the inner decision rules of the model. This is where intrinsically explainable models step in: they deliver explanations without the use of an additional method or library, which makes it easier to trust the result itself. An example of such a model is the Explainable Boosting Machine, made available in the InterpretML library. I trained such a model on the stroke dataset.
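Training one is a short exercise with InterpretML; the sketch below reuses the earlier train/test split and leaves the hyperparameters at their defaults, which is an assumption.

```python
# Minimal sketch of training an Explainable Boosting Machine with InterpretML,
# reusing the train/test split from the earlier sketch. Hyperparameters are
# left at their defaults (an assumption); EBMs handle string categorical
# features natively, so the raw dataframe can be used.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# impute the missing bmi values up front to stay version-agnostic
fill = {"bmi": X_train["bmi"].median()}

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train.fillna(fill), y_train)

# global explanation: per-feature shape functions and overall importances
show(ebm.explain_global())

# local explanations for our two individuals (intrinsic, no extra library needed)
show(ebm.explain_local(X_test.fillna(fill).iloc[:2], y_test.iloc[:2]))
```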

However, I did not tune its hyperparameters to reach the same accuracy rates (it takes quite some time to train this puppy!), resulting in a completely different model with possibly different decision rules.

Results of Explainable Boosting Machines.

The explanations indeed show that a completely different model was constructed. For instance, individual 2 is not predicted to get a stroke (a misclassification). Looking at the feature importance maps, though, we can see that residence type plays a critical role in both predictions. This is somewhat unintuitive and tells us we need to re-examine our model or dataset. The explanations are thus already very useful, even if the accuracy metrics are off.

Conclusion

I discussed four different methods for creating explanations for tabular data. All come with their own advantages and disadvantages. Overall, these explanations provided very useful insights into the workings of the model. However, one has to trust that the methods are truthful to the decision rules the model actually uses.

Which method is suitable depends on the use case and the users interpreting the explanations. Use user research to find out what your users are looking for first, then choose and design your method.

By using the available libraries you can already create “prototypes” to gather feedback from your users. This is a great first step towards designing your own interpretable and ethical AI solution.

References

Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. http://arxiv.org/abs/1702.08608

Fedesoriano. (2021). Stroke Prediction Dataset, version 1. https://www.kaggle.com/fedesoriano/stroke-prediction-dataset

Goodman, B., & Flaxman, S. (2017). European union regulations on algorithmic decision making and a “right to explanation.” AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741

Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).