Tools for building reinforcement learning (RL) models tailored to Two-Alternative Forced Choice (TAFC) tasks, which are widely used in psychological research. These models build on the foundational principles of model-free reinforcement learning detailed in Sutton and Barto (2018) <ISBN:9780262039246>. The package allows RL models to be defined intuitively with simple if-else statements. The approach to constructing and evaluating these computational models follows the guidelines proposed in Wilson & Collins (2019) <doi:10.7554/eLife.49547>. Example datasets included with the package are sourced from Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
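To make the "simple if-else statements" idea concrete, the following minimal sketch in base R implements a Rescorla-Wagner learner with a softmax choice rule for a TAFC task. It deliberately avoids binaryRL's own functions, since the package API is not documented on this page; the function name simulate_tafc and the parameter names alpha (learning rate) and tau (softmax temperature) are hypothetical, chosen purely for illustration.

# Plain base R sketch, NOT the binaryRL API: a Rescorla-Wagner learner
# for a two-alternative forced choice task, written in if-else style.
simulate_tafc <- function(rewards, alpha = 0.3, tau = 1) {
  n <- nrow(rewards)      # rewards: n x 2 matrix of payoffs per option
  Q <- c(0, 0)            # initial value estimates for the two options
  choices <- integer(n)
  for (t in seq_len(n)) {
    # Softmax choice rule over the two options.
    p1 <- exp(Q[1] / tau) / (exp(Q[1] / tau) + exp(Q[2] / tau))
    choice <- if (runif(1) < p1) 1L else 2L
    r <- rewards[t, choice]
    # If-else update: only the chosen option's value estimate is revised.
    if (choice == 1L) {
      Q[1] <- Q[1] + alpha * (r - Q[1])
    } else {
      Q[2] <- Q[2] + alpha * (r - Q[2])
    }
    choices[t] <- choice
  }
  choices
}

# Usage: option 1 pays off with probability 0.7, option 2 with 0.3.
set.seed(1)
rewards <- cbind(rbinom(100, 1, 0.7), rbinom(100, 1, 0.3))
table(simulate_tafc(rewards))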
Version: 0.9.0
Depends: R (≥ 4.0.0)
Imports: future, doFuture, foreach, doRNG, progressr
Suggests: stats, GenSA, GA, DEoptim, pso, mlrMBO, mlr, ParamHelpers, smoof, lhs, DiceKriging, rgenoud, cmaes, nloptr
Published: 2025-07-08
DOI: 10.32614/CRAN.package.binaryRL
Author: YuKi [aut, cre]
Maintainer: YuKi <hmz1969a at gmail.com>
BugReports: https://github.com/yuki-961004/binaryRL/issues
License: GPL-3
URL: https://yuki-961004.github.io/binaryRL/
NeedsCompilation: no
CRAN checks: binaryRL results

Please use the canonical form https://CRAN.R-project.org/package=binaryRL to link to this page.