When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>.
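A minimal usage sketch (assuming a 'caret' random forest classifier on the iris data, as in the package README; any supported model type can be substituted):

library(caret)  # assumed here only to fit an example model
library(lime)

# Hold out a few observations to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit a black box model (random forest via caret)
model <- train(iris_train, iris_lab, method = 'rf')

# Build an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Explain the held-out predictions: a local model is fitted around
# perturbations of each point, yielding per-feature weights
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualise the feature contributions for each case
plot_features(explanation)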
Version: 0.5.3
Imports: glmnet, stats, ggplot2, tools, stringi, Matrix, Rcpp, assertthat, methods, grDevices, gower
LinkingTo: Rcpp, RcppEigen
Suggests: xgboost, testthat, mlr, h2o, text2vec, MASS, covr, knitr, rmarkdown, sessioninfo, magick, keras, htmlwidgets, shiny, shinythemes, ranger
Published: 2022-08-19
DOI: 10.32614/CRAN.package.lime
Author: Emil Hvitfeldt [aut, cre], Thomas Lin Pedersen [aut], Michaël Benesty [aut]
Maintainer: Emil Hvitfeldt <emilhhvitfeldt at gmail.com>
BugReports: https://github.com/thomasp85/lime/issues
License: MIT + file LICENSE
URL: https://lime.data-imaginist.com, https://github.com/thomasp85/lime
NeedsCompilation: yes
Materials: README NEWS
In views: MachineLearning
CRAN checks: lime results