mirror of
https://github.com/gsi-upm/sitc
synced 2024-11-21 22:12:30 +00:00
444 lines
16 KiB
Plaintext
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2018 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning V](2_6_0_Intro_RL.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"\n",
"* [Introduction](#Introduction)\n",
"* [Getting started with OpenAI Gym](#Getting-started-with-OpenAI-Gym)\n",
"* [The Frozen Lake scenario](#The-Frozen-Lake-scenario)\n",
"* [Q-Learning with the Frozen Lake scenario](#Q-Learning-with-the-Frozen-Lake-scenario)\n",
"* [Exercises](#Exercises)\n",
"* [Optional exercises](#Optional-exercises)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction\n",
"The purpose of this practice is to gain a better understanding of Reinforcement Learning (RL) and, in particular, Q-Learning.\n",
"\n",
"We are going to use [OpenAI Gym](https://gym.openai.com/), a toolkit for developing and comparing RL algorithms. Take a look at their [website](https://gym.openai.com/).\n",
"\n",
"It implements [algorithm imitation](http://gym.openai.com/envs/#algorithmic), [classic control problems](http://gym.openai.com/envs/#classic_control), [Atari games](http://gym.openai.com/envs/#atari), [Box2D continuous control](http://gym.openai.com/envs/#box2d), [robotics with MuJoCo, Multi-Joint dynamics with Contact](http://gym.openai.com/envs/#mujoco), and [simple text-based environments](http://gym.openai.com/envs/#toy_text).\n",
"\n",
"This notebook is based on [Diving deeper into Reinforcement Learning with Q-Learning](https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe).\n",
"\n",
"First of all, install the OpenAI Gym library:\n",
"\n",
"```console\n",
"foo@bar:~$ pip install gym\n",
"```\n",
"\n",
"If you get the error message 'NotImplementedError: abstract', [execute](https://github.com/openai/gym/issues/775)\n",
"```console\n",
"foo@bar:~$ pip install pyglet==1.2.4\n",
"```\n",
"\n",
"If you want to try the Atari environments, it is better to opt for the full installation from source. Follow the instructions at [OpenAI Gym](https://github.com/openai/gym#id15).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started with OpenAI Gym\n",
"\n",
"First of all, read the [introduction](http://gym.openai.com/docs/#getting-started-with-gym) of OpenAI Gym."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Environments\n",
"OpenAI Gym provides a number of problems called *environments*.\n",
"\n",
"Try 'CartPole-v0' (or 'MountainCar-v0')."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"\n",
"env = gym.make('CartPole-v0')\n",
"#env = gym.make('MountainCar-v0')\n",
"#env = gym.make('Taxi-v2')\n",
"\n",
"#env = gym.make('Jamesbond-ram-v0')\n",
"\n",
"env.reset()\n",
"for _ in range(1000):\n",
"    env.render()\n",
"    env.step(env.action_space.sample()) # take a random action"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This will launch an external window with the game. If you cannot close that window, just execute in a code cell:\n",
"\n",
"```python\n",
"env.close()\n",
"```\n",
"\n",
"The full list of available environments can be obtained by printing the environment registry, as follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from gym import envs\n",
"print(envs.registry.all())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The environment’s **step** function returns four values:\n",
"\n",
"* **observation (object):** an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game.\n",
"* **reward (float):** amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.\n",
"* **done (boolean):** whether it’s time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated (for example, perhaps the pole tipped too far, or you lost your last life).\n",
"* **info (dict):** diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment’s last state change). However, official evaluations of your agent are not allowed to use this for learning.\n",
"\n",
"The typical agent loop consists of first calling the *reset* method, which provides an initial observation, and then repeatedly executing an action and receiving the reward, the new observation, and whether the episode has finished (done is True).\n",
"\n",
"As an example, analyze this agent loop, which runs up to 100 timesteps per episode. As described [here](https://github.com/openai/gym/wiki/CartPole-v0), for this game the previous variables are:\n",
"* **observation**: Cart Position, Cart Velocity, Pole Angle, Pole Velocity At Tip.\n",
"* **action**: 0 (Push cart to the left), 1 (Push cart to the right).\n",
"* **reward**: 1 for every step taken, including the termination step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"env = gym.make('CartPole-v0')\n",
"for i_episode in range(20):\n",
"    observation = env.reset()\n",
"    for t in range(100):\n",
"        env.render()\n",
"        print(observation)\n",
"        action = env.action_space.sample()\n",
"        print(\"Action \", action)\n",
"        observation, reward, done, info = env.step(action)\n",
"        print(\"Observation \", observation, \", reward \", reward, \", done \", done, \", info \", info)\n",
"        if done:\n",
"            print(\"Episode finished after {} timesteps\".format(t+1))\n",
"            break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The Frozen Lake scenario\n",
"We are going to play the [Frozen Lake](http://gym.openai.com/envs/FrozenLake-v0/) game.\n",
"\n",
"The problem is a grid where you should go from the 'start' (S) position to the 'goal' (G) position (the pizza!). You can only walk on the 'frozen' tiles (F). Unfortunately, you can fall into a 'hole' (H).\n",
"![](images/frozenlake-problem.png \"Frozen lake problem\")\n",
"\n",
"The episode ends when you reach the goal or fall into a hole. You receive a reward of 1 if you reach the goal, and zero otherwise. The possible actions are going left, right, up or down. However, the ice is slippery, so you won't always move in the direction you intend.\n",
"\n",
"![](images/frozenlake-world.png \"Frozen lake world\")\n",
"\n",
"Here you can see several episodes. A full recording is available at [Frozen World](http://gym.openai.com/envs/FrozenLake-v0/).\n",
"\n",
"![](images/recording.gif \"Example running\")\n"
]
},
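{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before training, it can help to render the grid once to see the S/F/H/G layout. A minimal sketch (it assumes the 'FrozenLake-v0' environment shipped with Gym, as installed above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"\n",
"env = gym.make('FrozenLake-v0')\n",
"env.reset()\n",
"env.render()  # prints the 4x4 grid of S, F, H, G tiles to the console"
]
},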
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Q-Learning with the Frozen Lake scenario\n",
"We are now going to apply Q-Learning to the Frozen Lake scenario. This part of the notebook is taken from [here](https://github.com/simoninithomas/Deep_reinforcement_learning_Course/blob/master/Q%20learning/Q%20Learning%20with%20FrozenLake.ipynb).\n",
"\n",
"First we create the environment and a Q-table initialized with zeros to store the value of each action in a given state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import gym\n",
"import random\n",
"\n",
"env = gym.make(\"FrozenLake-v0\")\n",
"\n",
"action_size = env.action_space.n\n",
"state_size = env.observation_space.n\n",
"\n",
"qtable = np.zeros((state_size, action_size))\n",
"print(qtable)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we define the hyperparameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Q-Learning hyperparameters\n",
"total_episodes = 10000        # Total episodes\n",
"learning_rate = 0.8           # Learning rate\n",
"max_steps = 99                # Max steps per episode\n",
"gamma = 0.95                  # Discounting rate\n",
"\n",
"# Exploration hyperparameters\n",
"epsilon = 1.0                 # Exploration rate\n",
"max_epsilon = 1.0             # Exploration probability at start\n",
"min_epsilon = 0.01            # Minimum exploration probability\n",
"decay_rate = 0.01             # Exponential decay rate for exploration prob"
]
},
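{
"cell_type": "markdown",
"metadata": {},
"source": [
"During training, the exploration rate decays exponentially from `max_epsilon` towards `min_epsilon`; this is the schedule applied at the end of each episode in the training loop below:\n",
"\n",
"$$\\epsilon = \\epsilon_{min} + (\\epsilon_{max} - \\epsilon_{min})\\, e^{-decay\\_rate \\cdot episode}$$"
]
},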
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we implement the Q-Learning algorithm.\n",
"\n",
"![](images/qlearning-algo.png \"Q-Learning algorithm\")"
]
},
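{
"cell_type": "markdown",
"metadata": {},
"source": [
"The core of the algorithm is the update rule applied after every step, which is the expression in the comment of the code below:\n",
"\n",
"$$Q(s,a) \\leftarrow Q(s,a) + \\alpha \\left[ r + \\gamma \\max_{a'} Q(s',a') - Q(s,a) \\right]$$\n",
"\n",
"where $\\alpha$ is the learning rate and $\\gamma$ the discounting rate."
]
},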
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List of rewards\n",
"rewards = []\n",
"\n",
"# 2. For life or until learning is stopped\n",
"for episode in range(total_episodes):\n",
"    # Reset the environment\n",
"    state = env.reset()\n",
"    step = 0\n",
"    done = False\n",
"    total_rewards = 0\n",
"    \n",
"    for step in range(max_steps):\n",
"        # 3. Choose an action a in the current world state (s)\n",
"        ## First we randomize a number\n",
"        exp_exp_tradeoff = random.uniform(0, 1)\n",
"        \n",
"        ## If this number is greater than epsilon --> exploitation (taking the biggest Q value for this state)\n",
"        if exp_exp_tradeoff > epsilon:\n",
"            action = np.argmax(qtable[state,:])\n",
"\n",
"        ## Else doing a random choice --> exploration\n",
"        else:\n",
"            action = env.action_space.sample()\n",
"\n",
"        # Take the action (a) and observe the outcome state (s') and reward (r)\n",
"        new_state, reward, done, info = env.step(action)\n",
"\n",
"        # Update Q(s,a) := Q(s,a) + lr [R(s,a) + gamma * max Q(s',a') - Q(s,a)]\n",
"        # qtable[new_state,:] : all the actions we can take from the new state\n",
"        qtable[state, action] = qtable[state, action] + learning_rate * (reward + gamma * np.max(qtable[new_state, :]) - qtable[state, action])\n",
"        \n",
"        total_rewards += reward\n",
"        \n",
"        # Our new state is state\n",
"        state = new_state\n",
"        \n",
"        # If done (if we're dead): finish episode\n",
"        if done:\n",
"            break\n",
"    \n",
"    # Reduce epsilon (because we need less and less exploration)\n",
"    epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-decay_rate*episode)\n",
"    rewards.append(total_rewards)\n",
"\n",
"print(\"Score over time: \" + str(sum(rewards)/total_episodes))\n",
"print(qtable)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we use the learned Q-table to play the Frozen Lake game."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"env.reset()\n",
"\n",
"for episode in range(5):\n",
"    state = env.reset()\n",
"    step = 0\n",
"    done = False\n",
"    print(\"****************************************************\")\n",
"    print(\"EPISODE \", episode)\n",
"\n",
"    for step in range(max_steps):\n",
"        env.render()\n",
"        # Take the action (index) that has the maximum expected future reward given that state\n",
"        action = np.argmax(qtable[state,:])\n",
"        \n",
"        new_state, reward, done, info = env.step(action)\n",
"        \n",
"        if done:\n",
"            break\n",
"        state = new_state\n",
"env.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercises\n",
"\n",
"## Taxi\n",
"Analyze the [Taxi problem](http://gym.openai.com/envs/Taxi-v2/) and solve it by applying Q-Learning. You can find a solution similar to the one previously presented [here](https://www.oreilly.com/learning/introduction-to-reinforcement-learning-and-openai-gym).\n",
"\n",
"Analyze the impact of not changing the learning rate (alpha or epsilon, depending on the book) or changing it in a different way."
]
},
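{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a starting point, you can reuse the setup above with the Taxi environment. A minimal sketch (it assumes the 'Taxi-v2' environment shipped with Gym; the hyperparameters and training loop are left to you):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import gym\n",
"\n",
"env = gym.make('Taxi-v2')\n",
"\n",
"# Taxi is also a discrete environment, so the same tabular Q-Learning applies\n",
"qtable = np.zeros((env.observation_space.n, env.action_space.n))\n",
"print(qtable.shape)"
]
},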
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional exercises\n",
"\n",
"## Doom\n",
"Read this [article](https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8) and execute the companion [notebook](https://github.com/simoninithomas/Deep_reinforcement_learning_Course/blob/master/Deep%20Q%20Learning/Doom/Deep%20Q%20learning%20with%20Doom.ipynb). Analyze the results and provide conclusions about DQN."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"* [Diving deeper into Reinforcement Learning with Q-Learning, Thomas Simonini](https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe).\n",
"* Illustrations by [Thomas Simonini](https://github.com/simoninithomas/Deep_reinforcement_learning_Course) and [Sung Kim](https://www.youtube.com/watch?v=xgoO54qN4lY).\n",
"* [Frozen Lake solution with TensorFlow](https://analyticsindiamag.com/openai-gym-frozen-lake-beginners-guide-reinforcement-learning/).\n",
"* [Deep Q-Learning for Doom](https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8).\n",
"* [Intro OpenAI Gym with Random Search and the Cart Pole scenario](http://www.pinchofintelligence.com/getting-started-openai-gym/).\n",
"* [Q-Learning for the Taxi scenario](https://www.oreilly.com/learning/introduction-to-reinforcement-learning-and-openai-gym)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The notebook is freely licensed under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/).\n",
"\n",
"© 2018 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.5"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
"autocomplete": true,
"bibliofile": "biblio.bib",
"cite_by": "apalike",
"current_citInitial": 1,
"eqLabelWithNumbers": true,
"eqNumInitial": 1,
"hotkeys": {
"equation": "Ctrl-E",
"itemize": "Ctrl-I"
},
"labels_anchors": false,
"latex_user_defs": false,
"report_style_numbering": false,
"user_envs_cfg": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}