anyrl-py

Public repository: 154 stars, 22 forks, 10 issues

Commits

Commits on branch master (all unverified; committed by unixpickle 6 years ago):

  • 953ad68d6507b83583e342b3210ed98e03a86a4f: use simpler super() syntax
  • 707c57b0233f9d35170517a7d5650e2fb0790d02: tests for MPIOptimizer
  • a8ad2fafb3817bfda847f763ed1edbde25059c44: fix MPIOptimizer for sparse gradients
  • 0a1798032caa98f926fa5399eec8a401ac65dc8c: fix attribute bug on newer versions of TF
  • a7e464d6917ece98438de0dcfdc9ecc020e597c1: bump version to 0.12.18
  • 2772eee1e8b1608d02a5ad56de0b6444a40e2054: workaround https://github.com/tensorflow/tensorflow/issues/21856

README


anyrl-py

This is a Python remake (and makeover) of anyrl. It is a general-purpose library for Reinforcement Learning which aims to be as modular as possible.

Installation

You can install anyrl with pip:

pip install anyrl

APIs

There are several different sub-modules in anyrl:

  • models: abstractions and concrete implementations of RL models, including actor-critic RNNs, MLPs, and CNNs. Takes care of details like sequence padding and BPTT.
  • envs: APIs for dealing with environments, including wrappers and asynchronous environments.
  • rollouts: APIs for gathering and manipulating batches of episodes or partial episodes. Many RL algorithms include a "gather trajectories" step, and this sub-module fulfills that role.
  • algos: well-known learning algorithms like policy gradients or PPO. Also includes mini-algorithms like Generalized Advantage Estimation.
  • spaces: tools for using action and observation spaces. Includes parameterized probability distributions for implementing stochastic policies.
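
The sketch below is a toy illustration of that layering, not anyrl's actual API: the helper names (random_policy, gather_rollouts, discounted_returns) are hypothetical stand-ins that only show where models, envs, rollouts, and algos would each plug in, and it assumes the classic Gym reset/step interface and a CartPole environment.

# Toy sketch of the layering described above. These helpers are NOT part of
# the anyrl API; they mark where each sub-module would fit.
import gym
import numpy as np


def random_policy(observation, action_space):
    # Stand-in for a `models` policy: map an observation to an action.
    return action_space.sample()


def gather_rollouts(env, policy, num_episodes=4):
    # Stand-in for the `rollouts` sub-module: collect full episodes.
    episodes = []
    for _ in range(num_episodes):
        obs = env.reset()
        done = False
        steps = []
        while not done:
            action = policy(obs, env.action_space)
            next_obs, reward, done, _ = env.step(action)
            steps.append((obs, action, reward))
            obs = next_obs
        episodes.append(steps)
    return episodes


def discounted_returns(rewards, gamma=0.99):
    # Stand-in for the `algos` helpers (e.g. advantage estimation).
    returns = np.zeros(len(rewards))
    running = 0.0
    for i in reversed(range(len(rewards))):
        running = rewards[i] + gamma * running
        returns[i] = running
    return returns


env = gym.make("CartPole-v1")
for steps in gather_rollouts(env, random_policy):
    rewards = [r for _, _, r in steps]
    # A real `algos` implementation (e.g. PPO) would consume these rollouts
    # and returns to update the `models` policy.
    print(len(steps), discounted_returns(rewards)[0])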

Motivation

Most existing RL code is tightly coupled and hard to reuse. In contrast, anyrl aims to be extremely modular and flexible. The goal is to decouple agents, learning algorithms, trajectories, and things like GAE.

For example, anyrl decouples rollouts from the learning algorithm (when possible). This way, you can gather rollouts in several different ways and still feed the results into one learning algorithm. Further, and more obviously, you don't have to rewrite rollout code for every new RL algorithm you implement. However, algorithms like A3C and Evolution Strategies may have specific ways of performing rollouts that can't rely on the rollout API.

Use of TensorFlow

This project relies on TensorFlow for models and training algorithms. However, anyrl APIs are framework-agnostic when possible. For example, the rollout API can be used with any policy, whether it's a TensorFlow neural network or a native-Python decision forest.
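
As a toy illustration of that point, the policy below is ordinary Python with no TensorFlow involved; the surrounding loop is a hypothetical stand-in rather than anyrl's rollout API, and it again assumes the classic Gym interface and CartPole.

# A policy that is plain Python rather than a TensorFlow graph: a fixed
# heuristic for CartPole. The rollout loop is a stand-in, not anyrl's API.
import gym


def heuristic_policy(observation):
    # Push the cart in the direction the pole is falling.
    _, _, pole_angle, _ = observation
    return 1 if pole_angle > 0 else 0


env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, _ = env.step(heuristic_policy(obs))
    total_reward += reward
print("episode reward:", total_reward)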

Style

I use autopep8 and flake8. Here is the command you can use to run autopep8:

autopep8 --recursive --in-place --max-line-length 100 .

I recommend the following flag for flake8: --max-line-length=100
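
Assuming you run flake8 over the whole project from the repository root, that flag corresponds to:

flake8 --max-line-length=100 .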