Update README.md
parent ba429c8827
commit d995b315e1
README.md (14 changed lines)
@@ -1,13 +1,7 @@
-# gym-random-walk
+# gym-tic-tac-toe
 
-A minimal example of a custom environment for https://github.com/openai/gym.
+An example of a custom environment for https://github.com/openai/gym.
 
-What the environment provides is not that important; this is meant to show how what you need to do to create your own environments for openai/gym.
+I want to try out self-play in a Reinforcement Learning context. Rather than the board game environments on openai/gym right now, which are "single-player" by providing a built-in opponent, I want to create an agent that learns a strategy by playing against itself, so it will try to maximize the reward for "player 1" and minimize it for "player 2".
 
-For concreteness I used an example in the recordings of David Silver's lectures on Reinforcement Learning at UCL.
-
-(0) - A - B - C - D - E - (+1)
-
-You start off at one of the positions A to E, you can move right or left, reaching the "+1" terminal state gives you a reward of +1, and going all the way "to the left" will give you a terminal reward of 0.
-
-Instead of calling them the above, I just made them the states 0, 1, ...6.
+The canonical example of a simple two player game is Tic Tac Toe, also known as Noughts and Crosses.
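For reference, the removed README describes the random-walk environment fully enough to sketch it. Below is a minimal sketch against the classic gym.Env interface (reset() returning an observation, step() returning a four-tuple); the class name RandomWalkEnv and the choice of a random start on A-E are illustrative assumptions, not code taken from this repository.

```python
import random

import gym
from gym import spaces


class RandomWalkEnv(gym.Env):
    """States 0..6 stand in for (0), A..E, (+1); states 0 and 6 are terminal."""

    def __init__(self):
        self.action_space = spaces.Discrete(2)       # 0 = move left, 1 = move right
        self.observation_space = spaces.Discrete(7)  # states 0..6
        self.state = None

    def reset(self):
        self.state = random.randint(1, 5)  # start on one of A..E (states 1..5)
        return self.state

    def step(self, action):
        self.state += 1 if action == 1 else -1
        done = self.state in (0, 6)
        reward = 1.0 if self.state == 6 else 0.0  # +1 only at the right terminal
        return self.state, reward, done, {}
```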
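The new README's self-play idea can be sketched the same way: one board, players +1 and -1 alternating turns inside a single environment, and one zero-sum reward that a self-playing agent maximizes as player 1 and minimizes as player 2. The class name, the flat 9-cell observation, and the move-legality assert are all hypothetical choices, not the repository's actual design.

```python
import gym
import numpy as np
from gym import spaces


class TicTacToeEnv(gym.Env):
    """Two players share one env; reward is +1 if player 1 wins, -1 if player 2 wins."""

    def __init__(self):
        self.action_space = spaces.Discrete(9)  # board cells 0..8
        self.observation_space = spaces.Box(low=-1, high=1, shape=(9,), dtype=np.int8)
        self.reset()

    def reset(self):
        self.board = np.zeros(9, dtype=np.int8)  # 0 = empty, +1 / -1 = the two marks
        self.player = 1                          # +1 moves first
        return self.board.copy()

    def step(self, action):
        assert self.board[action] == 0, "illegal move"
        self.board[action] = self.player
        # Zero-sum signal: +1 when player 1 completes a line, -1 when player 2 does.
        reward = float(self.player) if self._won(self.player) else 0.0
        done = reward != 0.0 or not (self.board == 0).any()  # win or full board (draw)
        self.player = -self.player                           # hand the turn over
        return self.board.copy(), reward, done, {}

    def _won(self, p):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]
        return any(all(self.board[i] == p for i in line) for line in lines)
```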