Approximate Bayesian regularization using Gaussian approximations. The input is a vector of estimates and a Gaussian error covariance matrix of the key parameters. Bayesian shrinkage is then applied to obtain parsimonious solutions. The method is described in Karimova, van Erp, Leenders, and Mulder (2024) <doi:10.31234/osf.io/2g8qm>. Gibbs samplers are used for model fitting. The supported shrinkage priors are Gaussian (ridge) priors, Laplace (lasso) priors (Park and Casella, 2008 <doi:10.1198/016214508000000337>), and horseshoe priors (Carvalho et al., 2010 <doi:10.1093/biomet/asq017>). These priors include an option for grouped regularization of different subsets of parameters (Meier et al., 2008 <doi:10.1111/j.1467-9868.2007.00627.x>). F priors are placed on the penalty parameters lambda^2 (Mulder and Pericchi, 2018 <doi:10.1214/17-BA1092>), which correspond to half-Cauchy priors on lambda (Carvalho, Polson, and Scott, 2010 <doi:10.1093/biomet/asq017>).
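For illustration, the workflow implied by the description (classical estimates plus their Gaussian error covariance passed to a shrinkage routine) might look as follows in R. This is a minimal sketch: the function name shrinkem() and the argument names x, Sigma, and type are assumptions about the package interface, not a verified specification; consult the package documentation (?shrinkem) for the exact usage.

  # Minimal illustrative sketch; the call signature below is an assumption,
  # not taken verbatim from the package documentation.
  library(shrinkem)

  # Classical (unpenalized) fit whose coefficients will be regularized
  fit <- lm(mpg ~ ., data = mtcars)
  estimates  <- coef(fit)[-1]       # vector of estimates (intercept dropped)
  covariance <- vcov(fit)[-1, -1]   # Gaussian error covariance of the estimates

  # Apply approximate Bayesian shrinkage with a horseshoe prior;
  # per the description, ridge (Gaussian) and lasso (Laplace) priors
  # are also supported as alternatives
  shrunk <- shrinkem(x = estimates, Sigma = covariance, type = "horseshoe")
  summary(shrunk)

The posterior draws produced by the Gibbs sampler can then be summarized to see which estimates are shrunk toward zero under the chosen prior.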
Linking: Please use the canonical form https://CRAN.R-project.org/package=shrinkem to link to this page.