
Model-based reinforcement learning

This repository contains a benchmark of model-based reinforcement learning solutions made of probabilistic models and planning agents. This benchmark was used to run the experiments of the paper "Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?", Balázs Kégl, Gabriel Hurtado, Albert Thomas, ICLR 2021. You can also check the associated blog post for the general context and a summary of this paper.

The different systems of the benchmark are located in the benchmark/ folder, each in its own subfolder.

You can easily install all the required packages with conda and the following procedure:

  1. Create a new conda environment from environment.yml using conda >= 4.9.2:

conda env create -f environment.yml

By default this will create an environment named mbrl. You can specify the name of your choice by adding -n <environment_name> to the conda env create command.

  2. Activate the environment with conda activate mbrl.

  3. Install the generative regression branch of ramp-workflow by running

pip install git+https://github.com/paris-saclay-cds/ramp-workflow.git@generative_regression_clean

  4. Install the mbrl-tools package by running pip install . in the mbrl-tools/ directory.

With this installation you can run all the models of the ICLR 2021 paper. If you do not want to run all the models you might only need a subset of the packages listed in environment.yml.

Finally, if you want to run the inverted pendulum experiments you need MuJoCo 2.0 and mujoco-py. mujoco-py can be installed easily with pip install mujoco-py.

We will go through the different functionalities using the acrobot system located in benchmark/acrobot/. The main structure of this folder follows the one required by ramp-workflow, with a few additional components for the dynamic evaluation (the model-based reinforcement learning loop).
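The dynamic evaluation loop alternates between collecting transitions with the current agent and refitting the model on all data gathered so far, with a random agent used for the first epoch when no model exists yet. A minimal sketch of this loop, with all function names hypothetical (they are not the package's API):

```python
import random

def mbrl_loop(env_reset, env_step, train_model, make_agent,
              n_epochs=3, min_steps_per_epoch=5):
    """Toy model-based RL loop: collect transitions, refit model, replan."""
    trace = []   # accumulated (state, action, next_state) transitions
    model = None  # no model yet: the first epoch uses a random agent
    for epoch in range(n_epochs):
        if model is None:
            agent = lambda s: random.choice([0, 1])  # random exploration
        else:
            agent = make_agent(model)  # planning agent built on the model
        state = env_reset()
        for _ in range(min_steps_per_epoch):
            action = agent(state)
            next_state = env_step(state, action)
            trace.append((state, action, next_state))
            state = next_state
        model = train_model(trace)  # refit the dynamics model on all data
    return model, trace
```

In the actual benchmark the model is a probabilistic (generative) dynamics model and the agent plans through it; this sketch only shows the epoch structure controlled by options such as the number of epochs and the minimum number of steps per epoch.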

To train and evaluate a model located in submissions/ on a static dataset run ramp-test --submission <submission_name> --data-label <dataset_name>. For instance to run the linear model on the dataset generated with a random policy:

ramp-test --submission arlin_sigma --data-label random

For more information on the ramp-test options and generated outputs please refer to the ramp-workflow documentation.

To evaluate a model coupled with a random shooting agent in a model-based reinforcement learning setup, use the model-based-rl command. For instance, to evaluate the linear model you can run

model-based-rl --submission arlin_sigma --agent-name random_shooting

The --submission option name was inherited from the terminology used by ramp-test. Other options include the number of epochs, the minimum number of steps per epoch, and using an initial trace instead of running a random agent for the first epoch. More information on the different options can be obtained by running model-based-rl --help.
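Random shooting itself is a simple planning scheme: sample many candidate action sequences, roll each one out through the learned model, and execute the first action of the best-scoring sequence. A minimal sketch, assuming a deterministic model step and reward function (all names are illustrative, not the package's API):

```python
import random

def random_shooting(model_step, reward_fn, state,
                    n_candidates=100, horizon=10, rng=None):
    """Return the first action of the best random plan under the model."""
    rng = rng or random.Random(0)
    best_return, best_first_action = float("-inf"), None
    for _ in range(n_candidates):
        # Sample a random open-loop plan of continuous actions in [-1, 1].
        plan = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in plan:              # roll the plan out through the model
            s = model_step(s, a)
            total += reward_fn(s, a)
        if total > best_return:
            best_return, best_first_action = total, plan[0]
    return best_first_action
```

In the benchmark the model is probabilistic, so rollouts sample next states from the model's predictive distribution rather than stepping deterministically; the selection of the best candidate sequence works the same way.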

If you use this code please cite our ICLR 2021 paper:

@inproceedings{Kegl2021,
  title={Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?},
  author={Kégl, Balázs and Hurtado, Gabriel and Thomas, Albert},
  booktitle={9th International Conference on Learning Representations, {ICLR} 2021},
  year={2021}
}
