Deep Reinforcement Learning for Keras.
keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras.
Furthermore, keras-rl works with OpenAI Gym out of the box. This means that evaluating and playing around with different algorithms is easy.
Of course you can extend keras-rl according to your own needs. You can use built-in Keras callbacks and metrics or define your own. It is also easy to implement your own environments, and even your own algorithms, by extending a few simple abstract classes. Documentation is available online.
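For a custom environment, keras-rl's agents work with any object that exposes the Gym-style interface: reset(), step(action) returning (observation, reward, done, info), plus render() and close(). Here is a minimal sketch of such an environment; the class and its reward scheme are made up for illustration, not part of keras-rl:

```python
import random

class CoinFlipEnv:
    """Toy environment exposing the Gym-style interface keras-rl expects.

    The agent guesses a coin flip (action 0 or 1) and receives reward
    +1 for a correct guess, -1 otherwise. Episodes last 10 steps.
    NOTE: illustrative example, not part of keras-rl or Gym.
    """

    def __init__(self, episode_length=10, seed=0):
        self.episode_length = episode_length
        self.rng = random.Random(seed)
        self.steps = 0
        self.coin = 0

    def reset(self):
        # Start a new episode and return the initial observation.
        self.steps = 0
        self.coin = self.rng.randint(0, 1)
        return [self.coin]

    def step(self, action):
        # Return (observation, reward, done, info), as agent.fit() expects.
        reward = 1.0 if action == self.coin else -1.0
        self.steps += 1
        done = self.steps >= self.episode_length
        self.coin = self.rng.randint(0, 1)
        return [self.coin], reward, done, {}

    def render(self, mode='human'):
        pass  # nothing to draw for this toy example

    def close(self):
        pass
```

An instance of such a class can then be handed to an agent's fit() in place of a Gym environment.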
As of today, the following algorithms have been implemented:
- Deep Q Learning (DQN)
- Double DQN
- Deep Deterministic Policy Gradient (DDPG)
- Continuous DQN (CDQN or NAF)
- Cross-Entropy Method (CEM)
- Dueling DQN
- Deep SARSA
You can find more information on each agent in the doc.
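The value-based agents pair a Q-value model with an action-selection policy such as rl.policy.EpsGreedyQPolicy. Conceptually, epsilon-greedy selection over the model's Q-values works as in this standalone numpy sketch (not keras-rl's actual source; the function name is made up):

```python
import numpy as np

def eps_greedy_action(q_values, eps, rng=None):
    """Pick a random action with probability eps, else the greedy one.

    q_values: 1-D array of Q-value estimates, one entry per action.
    eps: exploration rate in [0, 1].
    """
    rng = rng or np.random.default_rng()
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))  # explore: uniform random action
    return int(np.argmax(q_values))              # exploit: highest-value action

# With eps=0.0 the choice is purely greedy:
# eps_greedy_action(np.array([0.1, 0.9, 0.3]), eps=0.0) returns 1
```

During training, keras-rl anneals the exploration rate so that the agent explores broadly at first and exploits its learned Q-values later on.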
Install the latest release from PyPI:
pip install keras-rl
Alternatively, install from source:
git clone https://github.com/keras-rl/keras-rl.git
cd keras-rl
python setup.py install
If you want to run the examples, you’ll also have to install:
pip install h5py
For the Atari example you will also need:
pip install Pillow
pip install gym[atari]
Once you have installed everything, you can try out a simple example:
python examples/dqn_cartpole.py
This is a very simple example and it should converge relatively quickly, so it’s a great way to get started!
It also visualizes the game during training, so you can watch it learn. How cool is that?
Some sample weights are available on keras-rl-weights.
If you have questions or problems, please file an issue or, even better, fix the problem yourself and submit a pull request!
You’re using Keras-RL on a project? Open a PR and share it!
To see graphs of your training progress and compare across runs, run pip install wandb and add the WandbLogger callback to your agent’s fit() call:
from rl.callbacks import WandbLogger
...
agent.fit(env, nb_steps=50000, callbacks=[WandbLogger()])
For more info and options, see the W&B docs.
If you use keras-rl in your research, you can cite it as follows:
@misc{plappert2016kerasrl,
    author = {Matthias Plappert},
    title = {keras-rl},
    year = {2016},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/keras-rl/keras-rl}},
}