Getting Started with xplainfi

library(xplainfi)
library(mlr3)
library(mlr3learners)
library(data.table)
library(ggplot2)

xplainfi provides feature importance methods for machine learning models. It implements several approaches for measuring how much each feature contributes to model performance, with a focus on model-agnostic methods that work with any learner.

Core Concepts

Feature importance methods in xplainfi address different but related questions about how much each feature contributes to a model's predictions and performance.

All methods share a common interface built on mlr3, making them easy to use with any task, learner, measure, and resampling strategy.

The general pattern is to call $compute() to calculate importance (which always re-computes), then $importance() to retrieve the aggregated results, with intermediate results available in $scores() and, if the chosen measure supports it, $obs_loss().
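A minimal sketch of this pattern, using PFI (introduced below) on a small built-in task; the object name im is just a placeholder for any importance method:

# Construct a method object once, then compute and query results
im <- PFI$new(
    task = tsk("mtcars"),
    learner = lrn("regr.rpart"),
    measure = msr("regr.mse")
)
im$compute()      # always re-computes the importances
im$importance()   # aggregated importance per feature
im$scores()       # individual scores per iteration
im$obs_loss()     # observation-wise losses (requires a decomposable measure)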

Basic Example

Let’s use the Friedman1 task, which provides an ideal setup for demonstrating feature importance methods with known ground truth:

task <- tgen("friedman1")$generate(n = 300)
learner <- lrn("regr.ranger", num.trees = 100)
measure <- msr("regr.mse")
resampling <- rsmp("cv", folds = 3)

The task has 300 observations with 10 features. Features important1 through important5 truly affect the target, while unimportant1 through unimportant5 are pure noise. We’ll use a random forest learner with cross-validation for more stable estimates.
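We can verify this with the usual mlr3 task accessors:

# Inspect the generated task: number of rows and feature names
task$nrow
task$feature_names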

The target function is: \(y = 10 \sin(\pi x_1 x_2) + 20 (x_3 - 0.5)^2 + 10 x_4 + 5 x_5 + \epsilon\)
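For reference, the noise-free part of this target can be written as a plain R function (a sketch; x1 through x5 correspond to important1 through important5):

# Friedman1 mean function of the five informative features (noise term omitted)
friedman1_mean <- function(x1, x2, x3, x4, x5) {
    10 * sin(pi * x1 * x2) + 20 * (x3 - 0.5)^2 + 10 * x4 + 5 * x5
}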

Permutation Feature Importance (PFI)

PFI is the most straightforward method: for each feature, we permute (shuffle) its values and measure how much model performance deteriorates. More important features cause larger performance drops when shuffled.

pfi <- PFI$new(
    task = task,
    learner = learner,
    measure = measure,
    resampling = resampling
)

pfi$compute()
pfi$importance()
#> Key: <feature>
#>          feature   importance
#>           <char>        <num>
#>  1:   important1  5.087261575
#>  2:   important2  7.934989146
#>  3:   important3  1.091293042
#>  4:   important4 10.924662281
#>  5:   important5  2.432564965
#>  6: unimportant1 -0.024310146
#>  7: unimportant2  0.159875469
#>  8: unimportant3  0.023231148
#>  9: unimportant4  0.000284637
#> 10: unimportant5 -0.039391565

The importance column shows the performance difference when each feature is permuted. Higher values indicate more important features.
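Since $importance() returns a plain data.table, the results are easy to visualize, for example with ggplot2 (loaded above):

# Bar chart of aggregated importances, sorted by magnitude
ggplot(pfi$importance(), aes(x = reorder(feature, importance), y = importance)) +
    geom_col() +
    coord_flip() +
    labs(x = NULL, y = "Importance (difference in MSE)")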

For more stable estimates, we can use multiple permutation iterations per resampling fold with n_repeats. Note that here more is better: there is no clear "good enough" value, but a very small setting such as n_repeats = 1 will almost certainly yield unreliable results.

pfi_stable <- PFI$new(
    task = task,
    learner = learner,
    measure = measure,
    resampling = resampling,
    n_repeats = 50
)

pfi_stable$compute()
pfi_stable$importance()
#> Key: <feature>
#>          feature  importance
#>           <char>       <num>
#>  1:   important1  5.69331523
#>  2:   important2  8.14763339
#>  3:   important3  1.18196174
#>  4:   important4 11.99249476
#>  5:   important5  2.04693042
#>  6: unimportant1 -0.01206658
#>  7: unimportant2 -0.01718936
#>  8: unimportant3 -0.08372375
#>  9: unimportant4  0.08570232
#> 10: unimportant5 -0.05251443

We can also use the ratio instead of the difference for the importance calculation, in which case an unimportant feature is expected to receive an importance score of about 1 rather than 0:

pfi_stable$importance(relation = "ratio")
#> Key: <feature>
#>          feature importance
#>           <char>      <num>
#>  1:   important1  1.8377409
#>  2:   important2  2.2040144
#>  3:   important3  1.1752990
#>  4:   important4  2.7584425
#>  5:   important5  1.3009155
#>  6: unimportant1  0.9985212
#>  7: unimportant2  0.9970989
#>  8: unimportant3  0.9881171
#>  9: unimportant4  1.0123079
#> 10: unimportant5  0.9919494

Leave-One-Covariate-Out (LOCO)

LOCO measures importance by retraining the model without each feature and comparing performance to the full model. This shows the contribution of each feature when all other features are present.

loco <- LOCO$new(
    task = task,
    learner = learner,
    measure = measure,
    resampling = resampling
)

loco$compute()
loco$importance()
#> Key: <feature>
#>          feature importance
#>           <char>      <num>
#>  1:   important1  3.4607216
#>  2:   important2  5.8198340
#>  3:   important3  0.9363660
#>  4:   important4  7.4402008
#>  5:   important5  0.4303848
#>  6: unimportant1 -0.3244184
#>  7: unimportant2 -0.1915122
#>  8: unimportant3 -0.1675775
#>  9: unimportant4 -0.1774725
#> 10: unimportant5 -0.4084427

LOCO is computationally expensive as it requires retraining the model for each feature, but it provides a clear interpretation: higher values mean a larger performance drop when the feature is removed. However, it cannot distinguish between direct effects and indirect effects through correlated features.
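To compare the two methods directly, we can merge their importance tables (a small sketch using the pfi and loco objects computed above):

# Combine PFI and LOCO importances into one table for comparison
comparison <- merge(
    pfi$importance()[, .(feature, PFI = importance)],
    loco$importance()[, .(feature, LOCO = importance)],
    by = "feature"
)
comparison[order(-PFI)]

Both methods rank the five informative features clearly above the noise features, although the absolute scales differ.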

Feature Samplers

For advanced methods that account for feature dependencies, xplainfi provides different sampling strategies. While PFI uses simple permutation (marginal sampling), conditional samplers can preserve feature relationships.

Let’s demonstrate conditional sampling using adversarial random forests (ARF), which preserves relationships between features when sampling:

arf_sampler <- ConditionalARFSampler$new(task)

sample_data <- task$data(rows = 1:5)
sample_data[, .(important1, important2)]
#>    important1  important2
#>         <num>       <num>
#> 1:  0.2875775 0.784575267
#> 2:  0.7883051 0.009429905
#> 3:  0.4089769 0.779065883
#> 4:  0.8830174 0.729390652
#> 5:  0.9404673 0.630131853

Now we’ll conditionally sample the important1 feature given the values of important2 and important3:

sampled_conditional <- arf_sampler$sample_newdata(
    feature = "important1",
    newdata = sample_data,
    conditioning_set = c("important2", "important3")
)

sample_data[, .(important1, important2, important3)]
#>    important1  important2 important3
#>         <num>       <num>      <num>
#> 1:  0.2875775 0.784575267  0.2372297
#> 2:  0.7883051 0.009429905  0.6864904
#> 3:  0.4089769 0.779065883  0.2258184
#> 4:  0.8830174 0.729390652  0.3184946
#> 5:  0.9404673 0.630131853  0.1739838
sampled_conditional[, .(important1, important2, important3)]
#>    important1  important2 important3
#>         <num>       <num>      <num>
#> 1: 0.26312558 0.784575267  0.2372297
#> 2: 0.49047991 0.009429905  0.6864904
#> 3: 0.08545050 0.779065883  0.2258184
#> 4: 0.28119269 0.729390652  0.3184946
#> 5: 0.06460809 0.630131853  0.1739838

This conditional sampling is essential for methods like CFI and RFI that need to preserve feature dependencies. See the perturbation-importance article for detailed comparisons and vignette("feature-samplers") for more details on implemented samplers.
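As an illustration, CFI can be constructed much like PFI above; this sketch assumes CFI accepts the same constructor arguments and uses an ARF-based conditional sampler by default (see the linked articles for the exact interface):

# Conditional Feature Importance: features are sampled conditionally on the
# remaining features instead of being permuted marginally
cfi <- CFI$new(
    task = task,
    learner = learner,
    measure = measure,
    resampling = resampling
)
cfi$compute()
cfi$importance()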

Detailed Scoring Information

All methods store detailed scoring information from each resampling iteration for further analysis. Let’s examine the structure of PFI’s detailed scores:

pfi$scores() |>
    head(10) |>
    knitr::kable(digits = 4, caption = "Detailed PFI scores (first 10 rows)")
Detailed PFI scores (first 10 rows)

feature       iter_rsmp  iter_repeat  regr.mse_baseline  regr.mse_post  importance
------------  ---------  -----------  -----------------  -------------  ----------
important1            1            1             4.3639         9.1892      4.8253
important2            1            1             4.3639        10.7456      6.3817
important3            1            1             4.3639         5.0741      0.7102
important4            1            1             4.3639        15.5004     11.1365
important5            1            1             4.3639         6.4515      2.0876
unimportant1          1            1             4.3639         4.3224     -0.0415
unimportant2          1            1             4.3639         4.5137      0.1498
unimportant3          1            1             4.3639         4.4032      0.0393
unimportant4          1            1             4.3639         4.2489     -0.1150
unimportant5          1            1             4.3639         4.2589     -0.1050

We can also summarize the scoring structure:

pfi$scores()[, .(
    features = uniqueN(feature),
    resampling_folds = uniqueN(iter_rsmp),
    permutation_iters = uniqueN(iter_repeat),
    total_scores = .N
)]
#>    features resampling_folds permutation_iters total_scores
#>       <int>            <int>             <int>        <int>
#> 1:       10                3                 1           30

So $importance() always gives the aggregated importances across the resampling and permutation (or refitting) iterations, whereas $scores() gives the individual scores as calculated by the supplied measure, together with the corresponding importance, which by default is the difference of these scores.
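To make this relationship explicit, the aggregated values can be reproduced by averaging the per-iteration scores manually; this assumes the default aggregation is a simple mean across resampling and repetition iterations:

# Manual aggregation of the per-iteration scores; assuming a simple mean,
# this should match pfi$importance()
pfi$scores()[, .(importance = mean(importance)), by = feature]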

Analogously to $importance(), you can also use relation = "ratio" here:

pfi$scores(relation = "ratio") |>
    head(10) |>
    knitr::kable(digits = 4, caption = "PFI scores using the ratio (first 10 rows)")
PFI scores using the ratio (first 10 rows)

feature       iter_rsmp  iter_repeat  regr.mse_baseline  regr.mse_post  importance
------------  ---------  -----------  -----------------  -------------  ----------
important1            1            1             4.3639         9.1892      2.1057
important2            1            1             4.3639        10.7456      2.4624
important3            1            1             4.3639         5.0741      1.1627
important4            1            1             4.3639        15.5004      3.5520
important5            1            1             4.3639         6.4515      1.4784
unimportant1          1            1             4.3639         4.3224      0.9905
unimportant2          1            1             4.3639         4.5137      1.0343
unimportant3          1            1             4.3639         4.4032      1.0090
unimportant4          1            1             4.3639         4.2489      0.9736
unimportant5          1            1             4.3639         4.2589      0.9759

Observation-wise losses and importances

For methods where importances are calculated based on observation-level comparisons, and for decomposable measures, we can also retrieve observation-level information with $obs_loss(), which works analogously to $scores() and $importance() but provides even more detail:

pfi$obs_loss()
#>            feature iter_rsmp iter_repeat row_ids loss_baseline   loss_post
#>             <char>     <int>       <int>   <int>         <num>       <num>
#>    1:   important1         1           1       1    4.27303383  0.93888898
#>    2:   important1         1           1       9    0.94121187  0.11011124
#>    3:   important1         1           1      11    0.01545140 16.00963101
#>    4:   important1         1           1      12    0.13814810  0.29308440
#>    5:   important1         1           1      15   11.87605546 46.83609001
#>   ---                                                                     
#> 2996: unimportant5         3           1     290    7.97305649  8.01220503
#> 2997: unimportant5         3           1     294    1.06406580  0.92691286
#> 2998: unimportant5         3           1     295   10.92947732 10.76761731
#> 2999: unimportant5         3           1     296    0.03164589  0.08982416
#> 3000: unimportant5         3           1     298   14.52226394 14.52226394
#>       obs_importance
#>                <num>
#>    1:    -3.33414486
#>    2:    -0.83110062
#>    3:    15.99417961
#>    4:     0.15493629
#>    5:    34.96003455
#>   ---               
#> 2996:     0.03914854
#> 2997:    -0.13715294
#> 2998:    -0.16186002
#> 2999:     0.05817826
#> 3000:     0.00000000

Since we computed PFI using the mean squared error (msr("regr.mse")), we can use the associated Measure$obs_loss(), the squared error.
In the resulting table we see, for each observation, the baseline loss, the loss after permutation (loss_post), and their difference as obs_importance.
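Because the mean squared error is just the average of the observation-wise squared errors, averaging these losses per feature and resampling iteration reproduces the fold-level scores from $scores():

# Aggregate observation-wise losses back to fold-level scores
pfi$obs_loss()[, .(
    regr.mse_baseline = mean(loss_baseline),
    regr.mse_post = mean(loss_post)
), by = .(feature, iter_rsmp, iter_repeat)]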

Note that not all measures have a Measure$obs_loss(): some measures, such as msr("classif.auc"), are not decomposable, so observation-wise loss values are not available.
In other cases, the corresponding obs_loss() is simply not yet implemented in mlr3measures, but will likely be added in the future.

Parallelization

Both PFI/CFI/RFI and LOCO/WVIM support parallel execution to speed up computation when working with multiple features or expensive learners. The parallelization follows mlr3’s approach, allowing users to choose between mirai and future backends.

Example with future

The future package provides a simple interface for parallel and distributed computing:

library(future)
plan("multisession", workers = 2)

# PFI with parallelization across features
pfi_parallel <- PFI$new(
    task,
    learner = lrn("regr.ranger"),
    measure = msr("regr.mse"),
    n_repeats = 10
)
pfi_parallel$compute()
pfi_parallel$importance()

# LOCO with parallelization (uses mlr3fselect internally)
loco_parallel <- LOCO$new(
    task,
    learner = lrn("regr.ranger"),
    measure = msr("regr.mse")
)
loco_parallel$compute()
loco_parallel$importance()
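
When the parallel computation is finished, the future plan can be reset to sequential execution:

# Revert to sequential execution
plan("sequential")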

Example with mirai

The mirai package offers a modern alternative for parallel computing:

library(mirai)
daemons(n = 2)

# Same PFI/LOCO code works with mirai backend
pfi_parallel <- PFI$new(
    task,
    learner = lrn("regr.ranger"),
    measure = msr("regr.mse"),
    n_repeats = 10
)
pfi_parallel$compute()
pfi_parallel$importance()

# Clean up daemons when done
daemons(0)
