CS 234 Assignment 1
For submission instructions, please refer to the course website. For all problems, if you use an existing result from either the literature or a textbook to solve an exercise, you need to cite the source.
1 Flappy Karel MDP [25 pts]
There is a hot new mobile game on the market called Flappy Karel, where Karel the robot must dodge the red pillars of doom and flap its way to the green pasture. Consider the following two grid environments (Flappy World 1 and Flappy World 2). Starting from any unshaded square, Karel can either move right & up or right & down (e.g., from state 4 you can move to state 10 or 12; think checkers). Actions are deterministic and always succeed unless they would cause Karel to run into a wall. The thicker edges indicate walls, and attempting to move in the direction of a wall results in falling down one square (e.g., going in any direction from state 30 leads to falling into state 31). A successful run by Karel in Flappy World 1 is shown in Figure 1b. Taking any action from the green target square (no. 32) earns a reward of rg and ends the episode. Taking any action from the red squares of doom (no. 1, 7, 8, 12, 13...) earns a reward of rr and ends the episode. Otherwise, from every other square, taking any action is associated with a reward rs. Assume the discount factor γ = 0.9, rg = +5, and rr = −5 unless otherwise specified. Notice that the horizon is technically infinite in both worlds.
(Figure: Flappy World 1 is a 7-row by 5-column grid, with squares numbered down each column.)

 1    8   15   22   29
 2    9   16   23   30
 3   10   17   24   31
 4   11   18   25   32
 5   12   19   26   33
 6   13   20   27   34
 7   14   21   28   35
(a) Flappy World 1 (b) A successful run by Karel in Flappy World 1
Figure 1
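To make the dynamics concrete, here is a minimal Python sketch of the deterministic transition rule for Flappy World 1, assuming the 7-row by 5-column numbering shown in Figure 1; the function and variable names are illustrative, and the treatment of the top and bottom walls is an interpretation of the fall-down rule (only the right-wall case, 30 → 31, is spelled out in the text).

N_ROWS, N_COLS = 7, 5

def to_rc(s):
    """Map a state number 1..35 to (row, col), both 0-indexed from the top-left."""
    return (s - 1) % N_ROWS, (s - 1) // N_ROWS

def to_state(r, c):
    """Inverse of to_rc."""
    return c * N_ROWS + r + 1

def step_world1(s, action):
    """action is 'up' (right & up) or 'down' (right & down); returns the next state."""
    r, c = to_rc(s)
    dr = -1 if action == 'up' else 1
    if c + 1 >= N_COLS or not (0 <= r + dr < N_ROWS):
        # Moving toward a wall: Karel falls down one square instead (e.g. 30 -> 31).
        return to_state(min(r + 1, N_ROWS - 1), c)
    return to_state(r + dr, c + 1)

For example, step_world1(4, 'up') returns 10 and step_world1(4, 'down') returns 12, matching the checkers-style moves described above, and both actions from state 30 return 31.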
(a) Let rs ∈ {−4, −1, 0, 1}. Starting in square 2, for each of the possible values of rs, briefly explain what the optimal policy would be in Flappy World 1. In each case, is the optimal policy unique, and does it depend on the value of the discount factor γ? Explain your answer. [5 pts]
(b) What value of rs would cause the optimal policy to return the shortest path to the green target square? Using this value of rs, find the optimal value function for each square in Flappy World 1. What is the optimal action from square 27? [5 pts]
Now consider Flappy World 2. It is the same as Flappy World 1, except there are no walls on the right and left sides. Going past the right end of Flappy World 2 simply loops Karel around to the left-hand side. Take a look at Figure 2b for a successful run by Karel in Flappy World 2.
(a) Flappy World 2    (b) A successful run by Karel in Flappy World 2
Figure 2
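Under the same illustrative assumptions, the only change to the sketch above for Flappy World 2 is that the column index wraps around instead of hitting a side wall:

def step_world2(s, action):
    """Same as step_world1, but with no left/right walls: going past the right
    edge loops back to the left-hand side (column arithmetic modulo N_COLS)."""
    r, c = to_rc(s)
    dr = -1 if action == 'up' else 1
    if not (0 <= r + dr < N_ROWS):
        # Top/bottom walls still make Karel fall down one square (same interpretation as above).
        return to_state(min(r + 1, N_ROWS - 1), c)
    return to_state(r + dr, (c + 1) % N_COLS)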
(c) Let rs ∈ {−4, −1, 0, 1}. Starting in square 3, for each of the possible values of rs, briefly explain what the optimal policy would be in Flappy World 2. Using the value of rs that would cause the optimal policy to return the shortest path to the green target square, find the optimal value function for each square in Flappy World 2. What is the optimal action from square 27? [5 pts]
(d) Consider a general MDP with rewards and transitions, and a discount factor γ. For this case assume that the horizon is infinite (so there is no termination). A policy π in this MDP induces a value function V^π (let us refer to this as V_old^π). Now suppose we have the same MDP where all rewards have a constant c added to them and are then scaled by a constant a (i.e., r_new = a(c + r_old)). Can you come up with an expression for the new value function V_new^π induced by π in this second MDP in terms of V_old^π, c, a, and γ? [5 pts]
(e) Can scaling all the rewards by a fixed amount change the optimal policy of an MDP? If so, describe how different ranges of the constant a (where r_new = a · r_old) would change the optimal policy of the MDP from part (c). [5 pts]
2 Applications of the Performance Difference Lemma [20 pts]
The purpose of this exercise is to get familiar with how to compare the value of different policies, π1 and π2, on a fixed-horizon MDP. A fixed-horizon MDP is an MDP where the agent's state is reset after H timesteps; H is called the horizon of the MDP. There is no discount (i.e., γ = 1) and policies are allowed to be non-stationary, i.e., the action identified by a policy depends on the timestep in addition to the state. Let x_t ∼ π denote the distribution over states at timestep t (for 1 ≤ t ≤ H) upon following policy π, let V_t^π(x_t) denote the value function of policy π in state x_t at timestep t, and let Q_t^π(x_t, a) denote the corresponding Q value associated to action a. As a clarifying example, we write E_{x_t ∼ π1}[V(x_t)] for the average of the value function V(·) over the states at timestep t encountered upon following policy π1. The following equality is called the performance difference lemma:
\[
V_1^{\pi_1}(x_1) - V_1^{\pi_2}(x_1) = \sum_{t=1}^{H} \mathbb{E}_{x_t \sim \pi_2}\Big[ Q_t^{\pi_1}(x_t, \pi_1(x_t, t)) - Q_t^{\pi_1}(x_t, \pi_2(x_t, t)) \Big].
\]
Intuition: The above expression can be interpreted in the following way. For concreteness, assume that π1 is the better policy, i.e., that V_1^{π_1}(x_1) ≥ V_1^{π_2}(x_1). Suppose you are following policy π2 and you are at timestep t in state x_t. You have the option to follow π1 (the better policy) until the end of the episode, totalling a return of V_t^{π_1}(x_t) from the current state and timestep; or you have the option to follow π2 for one timestep and then follow π1 until the end of the episode (you can follow many other policies, of course). This would give you a "loss" of Q_t^{π_1}(x_t, π_1(x_t, t)) − Q_t^{π_1}(x_t, π_2(x_t, t)) that originates from following the worse policy π2 instead of π1 in that timestep. The equation above then says that the value difference of the two policies is the sum of all the losses induced by following the suboptimal policy at every timestep, weighted by the expected trajectory of the policy you are following.
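One way to see where this identity comes from is the standard telescoping argument; the following is only a sketch, written in the notation above and using the convention V_{H+1}^π ≡ 0.
\[
V_1^{\pi_1}(x_1) - V_1^{\pi_2}(x_1)
= \sum_{t=1}^{H}\Big(\mathbb{E}_{x_t \sim \pi_2}\big[V_t^{\pi_1}(x_t)\big] - \mathbb{E}_{x_{t+1} \sim \pi_2}\big[V_{t+1}^{\pi_1}(x_{t+1})\big]\Big) - \sum_{t=1}^{H}\mathbb{E}_{x_t \sim \pi_2}\big[r(x_t, \pi_2(x_t, t))\big],
\]
since the first sum telescopes to V_1^{π_1}(x_1) and the second sum is exactly V_1^{π_2}(x_1). Conditioning on x_t and averaging over the next state gives
\[
\mathbb{E}_{x_{t+1} \sim \pi_2}\big[V_{t+1}^{\pi_1}(x_{t+1})\big] + \mathbb{E}_{x_t \sim \pi_2}\big[r(x_t, \pi_2(x_t, t))\big] = \mathbb{E}_{x_t \sim \pi_2}\big[Q_t^{\pi_1}(x_t, \pi_2(x_t, t))\big],
\]
and combining this with V_t^{π_1}(x_t) = Q_t^{π_1}(x_t, π_1(x_t, t)) yields exactly the per-timestep losses on the right-hand side of the lemma.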
Question: You will use the performance difference lemma to solve this problem. Consider an MDP where the state space S is partitioned into two sets of states: S_+ and its complement.
In every state s ∈ S_+ there exists an action a_+ that leads to the same state with probability 1 and gives a unitary reward:
\[
p(s_{t+1} = s \mid s_t = s, a_t = a_+) = 1, \qquad p(s_{t+1} \neq s \mid s_t = s, a_t = a_+) = 0.
\]
The reward function is always positive. In S_+ the reward function equals 1 upon playing a_+ and H upon playing any action a ≠ a_+. Therefore, in S_+
\[
r(s, a_+) = 1, \qquad r(s, a) = H \quad \text{for all } a \neq a_+.
\]
Conversely, in any state s ∉ S_+, the reward function is in [0, 1] (∀ s ∉ S_+, ∀ a: r(s, a) ∈ [0, 1]).
Consider a policy π and define a policy π_+ that takes action a_+ in any state in S_+ and is otherwise equal to π:
\[
\pi_+(s) = a_+ \ \text{if } s \in S_+, \qquad \pi_+(s) = \pi(s) \ \text{if } s \notin S_+.
\]
Intuitively, π accumulates higher return than π_+: in any state in S_+ the policy π_+ chooses to take a unitary reward forever instead of a reward of H and then maybe more. Using the performance difference lemma, show that at any state s_0
.
3 Nonstationary Discount Factor γ [30 pts]
In this problem you will consider a variable discount factor γ. In lecture 2, we proved that the Bellman backup is a contraction for γ < 1 in the infinity norm.
In this problem we consider a non-stationary discount factor, and we assume you want to run K iterations of value iteration. Let V_K and V'_K be any two arbitrary initial value functions (at timestep K). The time-dependent Bellman backup operator B_k is defined as
\[
V_{k-1}(s) = (B_k V_k)(s) \;\stackrel{\text{def}}{=}\; \max_{a} \Big[ R(s, a) + \gamma_k \sum_{s' \in S} p(s' \mid s, a)\, V_k(s') \Big],
\]
where γ_k < 1 is the discount factor used at iteration k. Notice that the value function index is decreasing: K, K − 1, ..., 2, 1.
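To make the operator concrete, here is a minimal Python sketch of one application of B_k to a tabular value function; the dense array layout R[s, a] and P[s, a, s'] is an illustrative assumption, not taken from any starter code.

import numpy as np

def bellman_backup(V_k, R, P, gamma_k):
    """Return V_{k-1} = B_k V_k for a tabular MDP.
    R has shape (nS, nA), P has shape (nS, nA, nS), V_k has shape (nS,)."""
    # Q[s, a] = R[s, a] + gamma_k * sum_{s'} P[s, a, s'] * V_k[s']
    Q = R + gamma_k * (P @ V_k)
    # (B_k V_k)(s) maximizes over actions.
    return Q.max(axis=1)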
(a) [10 pts] Similarly to what you have done in class, show that the Bellman operator with a non-stationary discount factor at time step k is still a contraction, i.e.,
\[
\| B_k V - B_k V' \|_\infty \le \gamma_k \| V - V' \|_\infty.
\]
(b) [10 pts] Using the above inequality, prove that
\[
\| B_1 B_2 \cdots B_K V_K - B_1 B_2 \cdots B_K V'_K \|_\infty \le \gamma_1 \gamma_2 \cdots \gamma_K \| V_K - V'_K \|_\infty.
\]
(c) [10 pts] Unfortunately, γ_k ≈ 1 when k is large, so you cannot conclude that convergence occurs exponentially fast. However, the error still shrinks: show that the product γ_1 γ_2 ⋯ γ_K goes to 0 as K → ∞, which allows you to write
\[
\| B_1 B_2 \cdots B_K V_K - B_1 B_2 \cdots B_K V'_K \|_\infty \to 0 \quad \text{as } K \to \infty
\]
and ensure convergence, albeit at a slower rate.
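As a quick numerical sanity check of the contraction inequality in part (a), the snippet below applies the illustrative bellman_backup sketch from above to a randomly generated tabular MDP (sizes and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
nS, nA = 5, 3
R = rng.uniform(-1.0, 1.0, size=(nS, nA))      # arbitrary bounded rewards
P = rng.uniform(size=(nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)               # normalize p(. | s, a) into distributions
V, V_prime = rng.normal(size=nS), rng.normal(size=nS)
gamma_k = 0.95

lhs = np.max(np.abs(bellman_backup(V, R, P, gamma_k) - bellman_backup(V_prime, R, P, gamma_k)))
rhs = gamma_k * np.max(np.abs(V - V_prime))
assert lhs <= rhs + 1e-12                        # the contraction bound holds on this instance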
4 Frozen Lake MDP [25 pts]
Now you will implement value iteration and policy iteration for the Frozen Lake environment from OpenAI Gym. We have provided custom versions of this environment in the starter code.
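In the usual Gym discrete-environment convention, the transition model is exposed as P[s][a], a list of (probability, next_state, reward, terminal) tuples; whether the custom starter environments (and the function signatures in vi_and_pi.py) use exactly this layout is an assumption you should verify against the starter code. A minimal one-step lookahead over such a model might look like this:

def lookahead_q(P, V, s, a, gamma=0.9):
    """Q(s, a) = sum over (p, s', r, done) in P[s][a] of p * (r + gamma * V[s']).
    The future value is zeroed on terminal transitions; FrozenLake's terminal states
    self-loop with zero reward, so this choice mainly affects readability."""
    return sum(p * (r + gamma * (0.0 if done else V[s_next]))
               for p, s_next, r, done in P[s][a])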
(a) (coding) Read through vi_and_pi.py and implement policy_evaluation, policy_improvement and policy_iteration. The stopping tolerance (defined as max_s |V_old(s) − V_new(s)|) is tol = 10^{-3}. Use γ = 0.9. Return the optimal value function and the optimal policy. [10 pts]
(b) (coding) Implement value_iteration in vi_and_pi.py. The stopping tolerance is tol = 10^{-3}. Use γ = 0.9. Return the optimal value function and the optimal policy. [10 pts]
(c) (written) Run both methods on the Deterministic-4x4-FrozenLake-v0 and Stochastic-4x4-FrozenLake-v0 environments. In the second environment, the dynamics of the world are stochastic. How does stochasticity affect the number of iterations required and the resulting policy? [5 pts]