EXPAI Docs


Python Client Docs


Prediction Explanation

Shows how each of the variables influenced a single prediction made by our model.

Sample explanation

What are we trying to explain?

This explanation allows you to dive deeper into our model. It represents the impact that each variable had on a single prediction. Furthermore, it also captures interactions between variables: the combined impact of two variables is considered whenever it is larger than the sum of their separate impacts.

Remember that EXPAI also allows you to explain the prediction against a limited and meaningful subgroup of your data.

Why is it useful?

Information from this explanation can be used for different purposes.

For developers

How we do it

Plain English

We implement state-of-the-art techniques to explain samples. Unlike other approaches, we are able to detect interactions between variables. In other words, we can identify if the impact of two combined variables is larger than the sum of their separate impacts. This results in more stable and reliable explanations.

To do this, we first compute the impact of each feature on the prediction. Then, we calculate the impact of every pair of features. Next, we detect interactions between features by checking whether their combined impact is greater than the sum of their independent contributions. Finally, we order the variables and represent them on a plot.
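As a rough illustration of the interaction check described above, consider the following sketch. The model, feature values, and variable names are all invented for illustration; this is not the EXPAI API:

```python
# Hypothetical sketch: detecting an interaction between two features
# by comparing their combined impact against the sum of their
# separate impacts. None of these names come from the EXPAI client.

def toy_model(x1, x2):
    # A model where x1 and x2 interact through their product, so
    # their combined impact exceeds the sum of individual impacts.
    return 1.0 * x1 + 1.0 * x2 + 3.0 * x1 * x2

baseline = toy_model(0, 0)                # prediction with neither feature set

impact_x1 = toy_model(1, 0) - baseline    # single-step impact of x1 alone
impact_x2 = toy_model(0, 1) - baseline    # single-step impact of x2 alone
impact_pair = toy_model(1, 1) - baseline  # impact of the pair together

# Positive remainder -> the pair contributes more than the sum of its parts.
interaction = impact_pair - (impact_x1 + impact_x2)
print(interaction)  # 3.0
```

Here the individual impacts are 1.0 each, while the pair contributes 5.0, so the remaining 3.0 is attributed to the interaction rather than to either feature alone.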

More formally

Our method is based on local explanations with interactions, as presented by Gosiewska and Biecek (2019). Unlike other methods such as SHAP or LIME, this strategy handles interactions between variables, which ensures more stable and reliable explanations.

As presented in the previously mentioned paper, the algorithm does the following:

1. Calculate a single-step additive contribution for each feature.

2. Calculate a single-step contribution for every pair of features. Subtract the additive contributions to assess the interaction-specific contribution.

3. Order interaction effects and additive effects in a list that is used to determine sequential contributions.
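The ordering step can be sketched as follows. The effect names and values below are invented for illustration and are not produced by the EXPAI client:

```python
# Illustrative sketch of step 3: order additive and interaction effects
# by absolute magnitude to obtain the sequence of contributions.
# All names and numbers are hypothetical.

additive_effects = {"age": 0.40, "income": -0.15, "tenure": 0.05}
interaction_effects = {("age", "income"): -0.25}

# Merge both kinds of effects and sort by absolute contribution,
# so the most influential effects come first.
effects = {**additive_effects, **interaction_effects}
ordered = sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)

for name, value in ordered:
    print(name, value)
```

The resulting list starts with the strongest effect (here, the additive effect of `age`, followed by the `age`/`income` interaction) and is then used to assign sequential contributions for the plot.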

References

Gosiewska, Alicja, and Przemyslaw Biecek. 2019. *iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models*. https://arxiv.org/abs/1903.11420v1.
