Tennis-Udacity-Deep-Reinforcement-Learning

Project 3: Collaboration and Competition

Introduction

For this project, you will work with the Tennis environment.

(Animation: trained agents playing in the Tennis environment)

In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.

The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Each agent receives its own, local observation. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
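
As a quick orientation, the snippet below is a minimal sketch of how the environment can be loaded with the unityagents package used by the DRLND repository and how the per-agent observation and action sizes can be inspected. The file name "Tennis.app" is an assumption; point it at the environment file you actually downloaded.

```python
from unityagents import UnityEnvironment

# Assumed path -- replace with the environment you downloaded
# (e.g. Tennis.app on macOS, Tennis_Linux/Tennis.x86_64 on Linux).
env = UnityEnvironment(file_name="Tennis.app")

# The Tennis environment exposes a single "brain" shared by both agents.
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# Reset once in training mode and look at the per-agent observations.
env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations            # one row of observations per agent
print("Number of agents:", len(env_info.agents))
print("Observation size per agent:", states.shape[1])
print("Action size per agent:", brain.vector_action_space_size)
```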

The task is episodic, and in order to solve the environment, your agents must get an average score of +0.5 (over 100 consecutive episodes, after taking the maximum over both agents). Specifically,

  • After each episode, we add up the rewards that each agent received (without discounting), to get a score for each agent. This yields 2 (potentially different) scores. We then take the maximum of these 2 scores.
  • This yields a single score for each episode.

The environment is considered solved when the average (over 100 episodes) of those scores is at least +0.5.
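
To make the scoring rule concrete, here is a small Python/NumPy sketch of how the per-episode score and the 100-episode moving average might be computed; the variable and function names are illustrative, not part of the project code.

```python
import numpy as np
from collections import deque

scores_window = deque(maxlen=100)  # scores of the most recent 100 episodes

def episode_score(rewards_per_step):
    """rewards_per_step: list of (reward_agent_0, reward_agent_1) pairs for one episode."""
    totals = np.sum(np.array(rewards_per_step), axis=0)  # undiscounted sum per agent
    return float(np.max(totals))                         # max over the 2 agents

# After each episode:
#   scores_window.append(episode_score(rewards))
#   solved = len(scores_window) == 100 and np.mean(scores_window) >= 0.5
```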

Getting Started

  1. Download the environment from one of the links below. You need only select the environment that matches your operating system:

    (For Windows users) Check out this link if you need help determining whether your computer is running a 32-bit or 64-bit version of the Windows operating system.

    (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)

  2. Place the file in the DRLND GitHub repository, in the p3_collab-compet/ folder, and unzip (or decompress) the file.

Instructions

Follow the instructions in Tennis.ipynb to get started with training your own agent!
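
If you want to sanity-check the environment before training, the following sketch steps through one episode with random actions; it is a hedged example rather than the project's actual code, and the "Tennis.app" path is again an assumption. Replacing the random actions with your agent's policy is the natural starting point for training.

```python
import numpy as np
from unityagents import UnityEnvironment

# Assumed path -- use the environment file you downloaded.
env = UnityEnvironment(file_name="Tennis.app")
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env_info = env.reset(train_mode=False)[brain_name]   # reset the environment
num_agents = len(env_info.agents)
action_size = brain.vector_action_space_size
scores = np.zeros(num_agents)                        # running score for each agent

while True:
    # Random actions in [-1, 1] stand in for the agent's policy.
    actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
    env_info = env.step(actions)[brain_name]         # send actions to the environment
    scores += env_info.rewards                       # accumulate rewards per agent
    if np.any(env_info.local_done):                  # stop when the episode ends
        break

env.close()
print("Episode score (max over agents):", np.max(scores))
```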