Sleepy-robots.org

Where robots dream of electric sheep...


Monday, August 28, 2017

An alternative to the MuJoCo based OpenAI gyms: The pybullet environment for use with the OpenAI Gym Reinforcement Learning Research Platform

OpenAI Gym is currently one of the most widely used toolkits for developing and comparing reinforcement learning algorithms. Unfortunately, several of its challenging continuous control environments require the user to install MuJoCo, a commercial physics engine that needs a license to run for longer than 30 days. Such a commercial barrier hinders open research, especially given that other suitable physics engines exist. To meet the community's strong demand, we provide alternative implementations of the original MuJoCo environments that can be used free of charge. The environments have been reimplemented using pybullet, the Python wrapper for Bullet Physics, so that they integrate seamlessly into the OpenAI Gym framework. To demonstrate the usability of the new environments, several RL agents from Keras-RL are configured to be trained out of the box. To further simplify the training of agents, a Trainer class was implemented that captures command-line arguments in a unified fashion. The Trainer provides a set of standard arguments, but additional arguments can be defined by the agent and the environment, enabling the researcher to pass special parameters to either one.

To use the environments from pybullet, install pybullet version 1.2.6 or higher using pip:

pip install pybullet
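As a quick sanity check that the install succeeded, you can verify that the package is importable before trying any of the examples below (a minimal sketch using only the standard library):

```python
# Quick sanity check that pybullet installed correctly.
import importlib.util

# find_spec returns None if the module cannot be located on this interpreter.
spec = importlib.util.find_spec("pybullet")
print("pybullet importable:", spec is not None)
```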


The following things can be done using pybullet:


  • You can enjoy pretrained environments:
python -m pybullet_envs.examples.enjoy_TF_AntBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_HalfCheetahBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_HopperBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_HumanoidBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_InvertedDoublePendulumBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_InvertedPendulumBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_InvertedPendulumSwingupBulletEnv_v0_2017may
python -m pybullet_envs.examples.enjoy_TF_Walker2DBulletEnv_v0_2017may


  • Run some gym environment test:

python -m pybullet_envs.examples.racecarGymEnvTest


  • Train an agent based on OpenAI baselines DQN:


train:
python -m pybullet_envs.examples.train_pybullet_cartpole
python -m pybullet_envs.examples.train_pybullet_racecar

(Training saves a .pkl file with the learned weights. Leave it running for a while; it terminates once it reaches a reasonable reward.)

enjoy:
python -m pybullet_envs.examples.enjoy_pybullet_cartpole
python -m pybullet_envs.examples.enjoy_pybullet_racecar
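The enjoy scripts follow the usual OpenAI baselines deepq pattern of loading the saved .pkl and replaying the greedy policy. A hedged sketch of that pattern is below; the model filename is an assumption (check what the train script actually wrote to disk), and the gym API is the classic 4-tuple step interface of this era:

```python
# Sketch of replaying weights saved by a baselines-DQN training run.
# Assumptions: the saved file is "cartpole_model.pkl" (check your train
# script's output), and baselines' deepq.load() returns a callable policy.
try:
    import gym
    import pybullet_envs  # importing registers the *BulletEnv-v0 environments
    from baselines import deepq
    deps = True
except ImportError:
    deps = False

def enjoy(model_path="cartpole_model.pkl"):
    env = gym.make("CartPoleBulletEnv-v0")
    act = deepq.load(model_path)  # callable mapping observations to actions
    obs, done, total = env.reset(), False, 0.0
    while not done:
        # act expects a batch dimension, hence obs[None]
        obs, rew, done, _ = env.step(act(obs[None])[0])
        total += rew
    env.close()
    return total

if deps:
    print("episode return:", enjoy())
```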

For your own learning/training, create/import a specific Gym environment:

>python
import gym
import pybullet_envs
env = gym.make("AntBulletEnv-v0")
env = gym.make("HalfCheetahBulletEnv-v0")
env = gym.make("HopperBulletEnv-v0")
env = gym.make("HumanoidBulletEnv-v0")
env = gym.make("Walker2DBulletEnv-v0")
env = gym.make("InvertedDoublePendulumBulletEnv-v0")
env = gym.make("InvertedPendulumBulletEnv-v0")
env = gym.make("MinitaurBulletEnv-v0")
env = gym.make("RacecarBulletEnv-v0")
env = gym.make("KukaBulletEnv-v0")
env = gym.make("CartPoleBulletEnv-v0")

If you want to enable human/GUI rendering in a Gym-created environment, call env.render(mode="human") BEFORE the first env.reset().
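Putting the pieces together, a random-action rollout looks like the sketch below. It uses the classic gym API of this era (reset() returns an observation, step() returns a 4-tuple); the environment name is one of those registered above, and the commented-out render call shows where the GUI would be enabled:

```python
# Minimal sketch: a random-action rollout in one of the Bullet environments.
try:
    import gym
    import pybullet_envs  # importing registers the *BulletEnv-v0 environments
    HAVE_PYBULLET = True
except ImportError:
    HAVE_PYBULLET = False

def random_rollout(env_name="CartPoleBulletEnv-v0", max_steps=200):
    env = gym.make(env_name)
    # env.render(mode="human")  # for a GUI, call this BEFORE the first reset
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.action_space.sample()       # random policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    env.close()
    return total_reward

if HAVE_PYBULLET:
    print("random rollout return:", random_rollout())
```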
