* Implement semi-deterministic sampling of coalitions, similar to the default in the `shap` Python library and described and discussed as the PySHAP* strategy in Olsen & Jullum (2024). It is disabled by default, but can be set via `extra_computation_args = list(semi_deterministic_sampling = TRUE)` in `explain()`; see the sketch below. The functionality is available when paired coalition sampling (the default) is enabled. See #449 for details.
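  A minimal sketch of enabling this, assuming the current `explain()` signature (`model`, `x_explain`, `x_train`, `approach`, `phi0`, ...); the data/model setup and the `max_n_coalitions` value are illustrative, not from the changelog:

  ```r
  library(shapr)

  # Illustrative setup: a linear model on the airquality data (5 features)
  df <- na.omit(datasets::airquality)
  x_var <- c("Solar.R", "Wind", "Temp", "Month", "Day")
  x_train <- df[-(1:6), x_var]
  x_explain <- df[1:6, x_var]
  model <- lm(Ozone ~ ., data = df[-(1:6), c("Ozone", x_var)])

  # Enable PySHAP*-style semi-deterministic coalition sampling;
  # max_n_coalitions < 2^5 forces coalition sampling so the option has an effect
  explanation <- explain(
    model = model,
    x_explain = x_explain,
    x_train = x_train,
    approach = "gaussian",
    phi0 = mean(df$Ozone[-(1:6)]),
    max_n_coalitions = 20,
    extra_computation_args = list(semi_deterministic_sampling = TRUE)
  )
  ```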
* Deletes the regression-surrogate `parsnip` object when testing, to avoid future conflicts with model object changes (second-last commit in #447).
* Improve and update the logic and printout for setting the number of coalitions in the next iteration when `iterative = TRUE` (#452)
* Allow passing `vS_batching_method` to `explain()`/`explain_forecast()` to specify the batch computation method (default is `"future"` for both; `"forloop"` is available mainly for dev purposes) (#452); see the sketch below.
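  A sketch of the new argument, reusing the illustrative objects from the first code block:

  ```r
  # Compute the v(S) batches with a plain for-loop instead of the default "future"
  explanation_forloop <- explain(
    model = model,
    x_explain = x_explain,
    x_train = x_train,
    approach = "gaussian",
    phi0 = mean(df$Ozone[-(1:6)]),
    vS_batching_method = "forloop"
  )
  ```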
* Transform to use the `cli` and `rlang` packages to provide all messages/warnings/stops with nicer formatting and layout. The messages (via `cli_inform()`) now also obey the `verbose` argument and are displayed only if `'basic' %in% verbose` is `TRUE`. The header printout also differs between `explain()`/`explain_forecast()` and whether called from Python. This also adds `cli` and `rlang` to Imports (#453); see the sketch below.
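  For illustration, how the `verbose` argument interacts with the messaging (reusing the objects from the first code block; treating `verbose = NULL` for full silence as an assumption):

  ```r
  # Only the basic (now cli-formatted) messages
  explanation <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)]),
    verbose = "basic"
  )

  # No messages at all
  explanation <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)]),
    verbose = NULL
  )
  ```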
* Now using `testthat::skip_if_not_installed()` for all tests requiring suggested packages, to ensure they are skipped gracefully when dependencies are unavailable (#451); see the sketch below.
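  The standard `testthat` pattern being referred to; the test body is a hypothetical example reusing the objects from the first code block:

  ```r
  library(testthat)

  test_that("vaeac approach runs", {
    # Skip gracefully (rather than fail) when the suggested package is missing
    skip_if_not_installed("torch")

    expect_no_error(
      explain(
        model = model, x_explain = x_explain, x_train = x_train,
        approach = "vaeac",
        phi0 = mean(df$Ozone[-(1:6)])
      )
    )
  })
  ```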
* Fix in `KernelSHAP_reweighing()` (#448)
* Accept `NULL` for the `seed` argument, and only pass it to `torch` if not `NULL` (#452)
* Make `explain_forecast()` use `future` for batch computation as well (by default) (#452)
* Fix bug for `approach = 'empirical'` occurring when `n_features < n_explain` (#453)
* Fix documentation issues detected during the shapr 1.0.2 release (#442)
* Replace `print()` by `warning()` on two occasions
* Fix issue with `Expected <nn_module> but got object of type <NULL>` for `approach = 'vaeac'` after a recent `torch` update broke it (#444)
* Change the default `seed` in `explain()` and `explain_forecast()` from 1 to `NULL`, to avoid `set.seed()` conflicting with code called later (#445); see the sketch below.
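  A sketch of the behavioural difference, reusing the objects from the first code block:

  ```r
  # New default seed = NULL: explain() no longer calls set.seed() internally,
  # so the global RNG state is left untouched
  e1 <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)])
  )

  # Pass an explicit seed to recover the old reproducible behaviour
  e2 <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)]),
    seed = 1
  )
  ```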
* Other minor fixes
* Increase tolerance in `expect_snapshot_rds()` to reduce false positive roundoff errors between platforms (#444)
* Fix for `explain_forecast()` (#433)
* Fix related to `by=.I` (#434)
* Move `paired_shap_sampling` and `kernelSHAP_reweighting` into `extra_computation_args` (#428); see the sketch below.
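  A sketch of the new argument location, reusing the objects from the first code block; the `kernelSHAP_reweighting` value shown is an assumption:

  ```r
  # Previously top-level arguments to explain(), now collected in
  # extra_computation_args
  explanation <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)]),
    max_n_coalitions = 20,
    extra_computation_args = list(
      paired_shap_sampling = TRUE,     # the default, shown for illustration
      kernelSHAP_reweighting = "none"  # assumed option value
    )
  )
  ```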
* Fix bug with `iterative = TRUE` for `explain_forecast()`, which was not using coalitions from previous iterations (#426)
* Fix the `verbose` argument for `explain_forecast()` (#425)
* Handle that the `party` package returns a `constparty` object (#423)
* Fix `keep_samp_for_vS` with the iterative approach (#417)
* Fix for `explain()` in R (#416)
* Restructured from two function calls (`shapr()` for initial setup + `explain()` for explanation of specific observations) to a single function call (also named `explain()`); see the sketch below. The data used for training and to be explained have gotten explicit names (`x_train` and `x_explain`). The order of the input arguments has also been slightly changed (`model` is now the first argument).
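  A before/after sketch of the restructuring; the old 0.x call is reconstructed from memory and should be treated as approximate (objects as in the first code block):

  ```r
  # Old API (shapr < 1.0.0), approximate: setup and explanation in two steps
  explainer <- shapr(x_train, model)
  explanation_old <- explain(
    x_explain,
    approach = "empirical",
    explainer = explainer,
    prediction_zero = mean(df$Ozone[-(1:6)])
  )

  # New API (shapr >= 1.0.0): a single call, with model as the first argument
  explanation_new <- explain(
    model = model,
    x_explain = x_explain,
    x_train = x_train,
    approach = "empirical",
    phi0 = mean(df$Ozone[-(1:6)])
  )
  ```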
* Custom models are now explained by passing the prediction function directly to `explain()`, instead of defining it as a function of a specific class in the global env.
* The function `make_dummies`, used to explain `xgboost` models with categorical data, is removed to simplify the code base. This is rather handled with a custom prediction model.
* The function `explain.ctree_comb_mincrit`, which allowed combining models with `approach = "ctree"` with different `mincrit` parameters, has been removed to simplify the code base. It may return in a completely general manner in a later version of `shapr`.
* Introduce a Python wrapper (`shaprpy`, #325) for explaining predictions from Python models (from Python), utilizing almost all functionality of `shapr`. The wrapper moves back and forth between Python and R, doing the prediction in Python and almost everything else in R. This simplifies maintenance of `shaprpy` significantly. The wrapper is available here.
* Added progress updates via the `progressr` package; see the sketch below. Must be activated by the user with `progressr::handlers(global = TRUE)` or by wrapping the call to `explain()` in `progressr::with_progress({})`.
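  The two documented activation patterns, with illustrative calls (objects as in the first code block):

  ```r
  library(progressr)

  # Option 1: activate progress reporting globally
  handlers(global = TRUE)
  explanation <- explain(
    model = model, x_explain = x_explain, x_train = x_train,
    approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)])
  )

  # Option 2: wrap the individual call
  with_progress({
    explanation <- explain(
      model = model, x_explain = x_explain, x_train = x_train,
      approach = "gaussian", phi0 = mean(df$Ozone[-(1:6)])
    )
  })
  ```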
* Added `approach = 'categorical'` (#256, #307), used to explain models with solely categorical features by directly using/estimating the joint distribution of all feature combinations.
* Added `approach = 'timeseries'` (#314) for explaining classifications based on time series data/models, with the method described in Sec. 4.3 of the groupShapley paper.
* Added `explain_forecast()` to explain forecasts from time series models at various prediction horizons (#328). It uses a different set of input arguments, which is more appropriate for these models; see the sketch below.
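  A hedged sketch of a forecast explanation; the argument set is an assumption modelled on the package examples:

  ```r
  # Explain 3-step-ahead forecasts of an AR(2) model for the Temp series
  y <- datasets::airquality$Temp
  model_ar <- ar(y, order.max = 2)

  exp_fc <- explain_forecast(
    model = model_ar,
    y = y,
    train_idx = 3:150,
    explain_idx = 151:152,
    explain_y_lags = 2,
    horizon = 3,
    approach = "empirical",
    phi0 = rep(mean(y), 3)  # one baseline per forecast horizon
  )
  ```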
* Implemented a pure R version of the `approach = 'independence'` method, providing significantly faster computation (no longer a special case of the `empirical` method). Also allow the method to be used on models with categorical data (#315).
* Rewrote the tests as snapshot tests for `explain()`, also using vdiffr for the plot tests. Test functions are only written for exported core functions; internal functions are only tested through the exported ones.
* Replaced the example data with the `datasets::airquality` dataset. This avoids including a new package just for the dataset (#248).
* Fix related to `shapr(data[,1:5],model...)`
* Issue with `attach()`: fixed by changing how we simulate adding a function to `.GlobalEnv` in the failing test. The actual package is not affected.