Prediction Explanation
Shows how each variable influenced a single prediction made by our model
This explanation allows you to dive deeper into our model. It represents the impact that each variable had on a single prediction. Furthermore, it also captures interactions between variables: the combined impact of two variables is reported whenever it is larger than the sum of their separate impacts.
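As a rough formalisation (the notation below is ours, not taken from EXPAI's documentation): let $\Delta_S$ be the change in the model's average prediction when the features in $S$ are fixed to their values in the explained observation. A pair of variables $\{i, j\}$ is treated as an interaction when

$$
\left|\Delta_{\{i,j\}} - \left(\Delta_{\{i\}} + \Delta_{\{j\}}\right)\right| \gg 0,
$$

that is, when their joint impact clearly differs from the sum of their separate impacts.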
Remember that EXPAI also allows you to explain the prediction against a limited and meaningful subgroup of your data.
Information from this explanation can be used for different purposes.
For business
Quick prediction understanding: in automated processes, it is often difficult to understand why predictions are being made, yet this information can be very valuable for making optimal decisions. For instance, a sales representative may need to know why a customer is likely to leave the company in order to make a custom offer.
Accountability: it will be easy to explain to others why a certain prediction was made.
Error detection: even if you don't understand AI, it will be easy to spot and report incorrect predictions.
Ease AI adoption: non-technical users will be more likely to adopt AI if they are able to understand how it works.
Easy in-depth understanding: this explanation makes it easier to get an in-depth understanding of how the model works and to spot its shortcomings.
This interaction explanation is presented in detail in Gosiewska et al. (2019).
We implement state-of-the-art techniques to explain samples. Unlike other approaches, we are able to detect interactions between variables. In other words, we can identify if the impact of two combined variables is larger than the sum of their separate impacts. This results in more stable and reliable explanations.
To do this, we first compute the impact of each feature on the prediction. Then, we calculate the impact of every pair of features. Next, we detect interactions between features by checking whether their combined impact is greater than the sum of their independent contributions. Finally, we order the variables and represent them on a plot.
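The sketch below illustrates this procedure from scratch in Python. The dataset, model, and interaction threshold are illustrative assumptions, not EXPAI internals; the actual implementation (Gosiewska et al. 2019) handles sampling and uncertainty far more carefully.

```python
# Minimal illustration of the steps above (assumed dataset and model, not EXPAI code).
from itertools import combinations

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x_star = X[0]                                     # the observation we want to explain
baseline = model.predict_proba(X)[:, 1].mean()    # average prediction over the data

def delta(features):
    """Change in the average prediction when `features` are fixed to x_star's values."""
    X_fixed = X.copy()
    X_fixed[:, features] = x_star[features]
    return model.predict_proba(X_fixed)[:, 1].mean() - baseline

n_features = X.shape[1]

# 1. Impact of each feature on the prediction.
single = {(i,): delta([i]) for i in range(n_features)}

# 2. Impact of every pair of features, and 3. interaction detection: keep a pair
#    only if its combined impact clearly exceeds the sum of the separate impacts.
interactions = {}
for i, j in combinations(range(n_features), 2):
    extra = delta([i, j]) - (single[(i,)] + single[(j,)])
    if abs(extra) > 0.01:                         # illustrative threshold
        interactions[(i, j)] = extra

# 4. Order all effects by absolute impact, ready to be drawn on a plot.
ordered = sorted({**single, **interactions}.items(), key=lambda kv: abs(kv[1]), reverse=True)
for features, impact in ordered[:5]:
    print(features, round(impact, 4))
```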
Our method is based on local explanations with interactions, as presented by Gosiewska et al. (2019). Unlike other methods such as SHAP or LIME, this strategy can handle interactions between variables, which ensures more stable and reliable explanations.
As presented in the previously mentioned paper, the algorithm does the following:
Calculate a single-step additive contribution for each feature.
Calculate a single-step contribution for every pair of features. Subtract the additive contributions to assess the interaction-specific contribution.
Order interaction effects and additive effects in a list that is used to determine sequential contributions.
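To make the last step concrete, the following sketch (again with an assumed model and a hypothetical ordering, not EXPAI code) shows how an ordered list of additive and interaction effects translates into sequential contributions: feature groups are fixed to the explained observation's values one at a time, and each group is credited with the resulting change in the average prediction.

```python
# Sequential contributions for a given ordering of effects (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
x_star = X[0]

# Hypothetical ordering from the previous steps: three additive effects and one interaction.
ordering = [(22,), (27, 7), (13,), (3,)]

X_fixed = X.copy()
previous_mean = model.predict_proba(X_fixed)[:, 1].mean()    # start from the baseline
for group in ordering:
    X_fixed[:, list(group)] = x_star[list(group)]            # fix this group to x_star's values
    current_mean = model.predict_proba(X_fixed)[:, 1].mean()
    print(f"features {group}: contribution {current_mean - previous_mean:+.4f}")
    previous_mean = current_mean

# The contributions telescope: their sum plus the baseline equals the average
# prediction after all listed features have been fixed.
```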
For more details on the algorithm and derivations, see Section 4 in Gosiewska et al. (2019).
Gosiewska, Alicja, and Przemyslaw Biecek. 2019. iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models. https://arxiv.org/abs/1903.11420v1.