# Project 2

## Reinforcement Learning

*Due Date: by 5 pm on Friday, November 8th.*

This project is a competition to find the best policy for three different Markov decision processes given sampled transitions, each consisting of a state `s`, action `a`, reward `r`, and next state `sp`. The best policy is the one that maximizes the total expected reward.

Three CSV-formatted datasets will be provided. Note that this is an instance of *batch reinforcement learning*, where the data from exploring has already been collected and given to you, and you have to work with it. There will be no opportunity to explore at runtime in a simulator, because your output will be a (deterministic) policy which we will run on our simulator. Keep this in mind, particularly if you are using a model-free approach.

- `small.csv`: A 10×10 grid world (100 states) with 4 actions. Use `LinearIndices((10,10))[x,y]` to find the integer state from the x and y coordinates. Actions are 1: left, 2: right, 3: up, 4: down. The discount factor is 0.95.
- `medium.csv`: The MountainCarContinuous-v0 environment from OpenAI Gym with altered parameters. State measurements are given by integers with 500 possible position values and 100 possible velocity values (50,000 possible state measurements); `1 + pos + 500*vel` gives the integer corresponding to a state with position `pos` and velocity `vel`. There are 7 actions that represent different amounts of acceleration. This problem is undiscounted and ends when the goal (the flag) is reached. Note that since the discrete state measurements are calculated after the simulation, the data in `medium.csv` does not quite satisfy the Markov property.
- `large.csv`: It’s a secret! An MDP with 312,020 states and 9 actions, with a discount factor of 0.95. This problem has a lot of hidden structure. Look at the transitions and rewards carefully and you might figure some of it out!
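To make the two indexing conventions concrete, here is a small sketch (in Python; the helper names `small_state` and `medium_state` are our own, not part of the course code) that reproduces the formulas above, assuming 1-based grid coordinates for the small problem and 0-based `pos`/`vel` measurements for the medium problem:

```python
# Hypothetical helpers mirroring the indexing conventions described above.

def small_state(x, y):
    """1-based (x, y) grid coordinates -> linear state index,
    matching Julia's column-major LinearIndices((10, 10))[x, y]."""
    return x + 10 * (y - 1)

def medium_state(pos, vel):
    """Discretized (pos, vel) -> integer state via 1 + pos + 500*vel,
    assuming pos in 0..499 and vel in 0..99."""
    return 1 + pos + 500 * vel

print(small_state(10, 10))    # 100, the last of the 100 grid states
print(medium_state(499, 99))  # 50000, the last state measurement
```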

Each line in the CSV file represents a transition from state `s` to `sp` (along with the associated reward `r`) when taking action `a`. You might not have data for every state, nor are all states equally important. Use your best judgment when handling this.
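One common way to use such logged transitions is to fit a maximum-likelihood model of the MDP. The sketch below (illustrative only, not a prescribed approach) estimates transition probabilities and mean rewards from `(s, a, r, sp)` tuples; with the provided files you would obtain the tuples via `csv.DictReader`, assuming the header names are `s`, `a`, `r`, `sp`:

```python
from collections import defaultdict

# Maximum-likelihood model from sampled transitions (illustrative sketch).
def build_model(rows):
    counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {sp: count}
    reward_sum = defaultdict(float)                 # (s, a) -> summed reward
    visits = defaultdict(int)                       # (s, a) -> visit count
    for s, a, r, sp in rows:
        counts[(s, a)][sp] += 1
        reward_sum[(s, a)] += r
        visits[(s, a)] += 1
    # Normalize counts into probabilities and average the rewards.
    T = {sa: {sp: n / visits[sa] for sp, n in nxt.items()}
         for sa, nxt in counts.items()}
    R = {sa: reward_sum[sa] / visits[sa] for sa in visits}
    return T, R

T, R = build_model([(1, 1, 0.0, 2), (1, 1, 0.0, 3), (1, 1, 10.0, 2)])
print(T[(1, 1)])  # transition probabilities 2/3 (to state 2) and 1/3 (to 3)
print(R[(1, 1)])  # mean reward 10/3
```

Note that this only covers state-action pairs that appear in the data; handling the unvisited ones is part of the modeling judgment the assignment asks for.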

The files can be accessed from the `AA228Student/workspace/project2/` directory (follow step 1 for Project 0 if you do not have the `AA228Student` repository). To update your local repository, run `git pull` and then `julia aa228download.jl` from the `AA228Student` directory. You will use `submit` to evaluate and submit your policies.

## Rules

- You can use any programming language, but you **cannot use any package/algorithms directly related to reinforcement learning**. You can use general optimization packages and libraries for things like function approximation and ODE solving, so long as you discuss what you use in your writeup and make it clear how it is used in your code.
- You are free to use the data structures of POMDPs.jl, just not any algorithms.
- You can use different approaches and algorithms for the different problems. It will probably help to look at the data carefully and use the structure of the problem to your advantage.
- Discussions are encouraged, and we expect similar approaches to the project from multiple people, but you must write your own code; doing otherwise violates the Stanford Honor Code.
- You may look at the code for the MountainCarContinuous-v0 environment, but note that we will not use exactly the same parameters.
- Submit a README PDF file with a description of your algorithm(s) and their performance characteristics. Include the running/training time for each policy.
**Grading Rubric:**

- Small Problem – 10%
- Medium Problem – 20%
- Large Problem – 30%
- README.pdf – 30%
  - Description of strategy – 15%
  - Performance characteristics; running/training time – 15%
- Code – 10%

*Note:* **The scores on the leaderboard are after subtracting the score of a random policy, so you should aim for at least positive leaderboard scores to get credit. If your scores are only slightly greater than 0, we will look closely at your code to make sure you implemented something reasonable. For large.csv, the difference is additionally multiplied by 10 to give it a higher weight on the leaderboard (not for grading).**

## Submission

- **Ensure you have the** `AA228Student` **repository locally**; if not, follow step 1 for Project 0.
- **Navigate to the** `AA228Student/workspace/project2/` **directory.**
  - Run `git pull` to get any updates before starting work.
- **You should find the three CSV files** `small.csv`, `medium.csv`, and `large.csv` in your Project 2 workspace.
- **Work on the project in the** `AA228Student/workspace/project2/` **directory, or copy your .policy files there.**
  - The following files need to be submitted using `submit` (and located in the `project2` directory): `small.policy`, `medium.policy`, and `large.policy`.
- **Run the** `submit` **command to record policy scores** (from `AA228Student/workspace/`): `submit project2 <YOUR_EMAIL>@stanford.edu <NICKNAME>`
  - Note: use `./submit` on *Linux/Mac OS X* and `submit` on *Windows*.
  - If you do not provide a `<NICKNAME>`, your SUNet ID will be used (i.e., `<YOUR_EMAIL>` without “@stanford.edu”).
  - It will complain if any files are missing and tell you your score for each `.policy` file. It will also submit your `.policy` files for evaluation and place them on the leaderboard. **Please be sure to keep your terminal open until you receive a *successful* acknowledgement from the server (this may take a minute).**
  - Check your leaderboard status with: `submit status <YOUR_EMAIL>@stanford.edu`
- **Important: submit supplemental material using Canvas.**
  - **Login to http://canvas.stanford.edu and navigate to this course.**
  - **Use the sidebar to go to Assignments**, then click on Project 2.
  - **Upload the following files to Project 2:** your code file(s), `README.pdf`, `small.policy`, `medium.policy`, and `large.policy`.

*Note:* If you are having trouble submitting through `AA228Student`, we put together a video demonstration.

## Leaderboard

Click here to view the live leaderboard.

## FAQ

*This list has been extended from last year to reflect common queries made on Piazza. You may find your query answered here without even needing to wait on Piazza!*

**What programming languages can I use?**

- You may use any language you like! We are only looking for your source code and the output `.policy` files.

**How do I convert linear indices to subscripts and vice versa?**

- For Julia, use `CartesianIndices` and `LinearIndices`. For Python, use `numpy.unravel_index` and `numpy.ravel_multi_index`. For MATLAB, use `ind2sub` and `sub2ind`.
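One subtlety worth flagging: Julia's `LinearIndices` is column-major and 1-based, while NumPy defaults to row-major and 0-based indexing. A quick sketch of reproducing `LinearIndices((10,10))[x,y]` in NumPy, using `order="F"` and an index shift:

```python
import numpy as np

# Julia's LinearIndices((10,10))[x,y] is column-major and 1-based;
# NumPy needs order="F" plus a 0-based shift to match it.
x, y = 3, 7
s = np.ravel_multi_index((x - 1, y - 1), (10, 10), order="F") + 1
print(s)  # 3 + 10*(7-1) = 63

# And back from the linear state to (x, y):
xi, yi = np.unravel_index(s - 1, (10, 10), order="F")
print(xi + 1, yi + 1)  # recovers (3, 7)
```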
**I like the competition and leaderboard aspect. Can we use late days for the competition?**

- No. You can only use late days for the general project grading. Any submissions after the deadline will not be considered for the leaderboard.

**What’s the output format for the `.policy` file?**

- Your program should output a file with the same name as the input filename, but with a `.policy` extension, e.g., `small.policy`. Each output file should contain an action for every possible state in the problem: the i-th row contains the action to be taken from the i-th state. `small.policy` should contain 100 lines, `medium.policy` should contain 50,000 lines, and `large.policy` should contain 312,020 lines. Each line should have a number between 1 and the total number of actions for that problem. On Linux/macOS, you can check the number of lines with `wc -l <file>`, e.g. `wc -l small.policy`.
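A minimal sketch of writing such a file (the helper name `write_policy` and the default-action fallback for unseen states are our own choices, not prescribed by the assignment):

```python
# Hypothetical helper: write one action per line, one line per state.
# States never seen in the data fall back to a default action; that
# fallback is a modeling choice, not something the assignment specifies.
def write_policy(actions_by_state, n_states, path, default_action=1):
    with open(path, "w") as f:
        for s in range(1, n_states + 1):
            f.write(f"{actions_by_state.get(s, default_action)}\n")

# e.g. a (toy) policy mapping state 1 -> action 2 and state 3 -> action 4:
write_policy({1: 2, 3: 4}, 100, "small.policy")
```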
**What does the score represent?**

- For each problem, the score is the expected value for a state drawn from an initial distribution D, that is, \(\underset{s \sim D}{E}[U(s)]\). For all problems, we evaluate this expectation using simulations. *The higher the score, the better.* If the scores are too close, we’ll increase the number of Monte Carlo simulations. **The scores on the leaderboard are after subtracting the score of a random policy.** Furthermore, the *score for submissions* uses fewer Monte Carlo simulations than the score for the leaderboard, so they won’t match each other exactly.
**What’s the reward?**

- All we can say is that the reward is a function of both the states and actions.

**What packages *can* we use?**

- For the computational part, sparse matrix libraries (built in to numpy and Julia) could be useful. Feel free to use any deep learning and machine learning packages as well (as long as you are not using any RL implementations therein). For building models with the data, data-munging libraries like `DataFrames.jl` or Python’s pandas, or graphing libraries like `StatPlots.jl`, might be useful.
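As an illustration of why sparse matrices help here (assuming Python with SciPy; the variable names are our own), transition counts for one action can be accumulated in a sparse state-by-state matrix, with COO construction summing duplicate `(s, sp)` pairs automatically:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Illustrative: accumulate transition counts for a single action into a
# sparse n_states x n_states matrix. COO construction sums duplicates.
n_states = 100
s = np.array([1, 1, 2])   # source states (1-based, as in the data)
sp = np.array([2, 2, 3])  # next states
counts = coo_matrix((np.ones_like(s), (s - 1, sp - 1)),
                    shape=(n_states, n_states)).tocsr()
print(counts[0, 1])  # 2 observed transitions from state 1 to state 2
```

Row-normalizing such a matrix gives the empirical transition probabilities, and for the large problem's 312,020 states a dense matrix would be infeasible.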
**What about the terminal states?**

- There are no terminal states for `small.csv` and `large.csv`. For `medium.csv`, the simulation ends when the new position is beyond some position (where the flag is). You may want to think about which integer state measurements correspond to this case.
**Why does the same `(s,a)` lead to a different `sp` in some lines?**

- The MDP is stochastic, i.e., the transition function defines a probability distribution over next states. So yes, sometimes you’ll reach different next states from the same state, even though you took the same action; however, the probabilities might be different. Also, for a given `(s,a) -> sp`, the reward is always the same (with the exception of the medium dataset).
**Is it necessary to implement a score function for this project?**

- This project is a bit different. In Project 1, if you implemented the scoring function correctly, you would know your place on the leaderboard. For Project 2, however, you are evaluated on an MDP model that is kept secret from you: samples from this model are provided, and you can create an approximation of it, but you don’t know it exactly.

**Does the value of the discount factor matter?**

- Yes. Technically you could pick whatever discount factor you want, but the policy will be evaluated with the specified discount factor. Generally you’d prefer to train with the discount factor you’ll be evaluated with, but there are situations where playing with the discount factor can improve performance.

**What can we do when we reach a state that has no actions or reward associated with it?**

- You can use an interpolation scheme to approximate from the available states. This is particularly feasible for the medium problem, but will be difficult for the large problem because of its underlying structure.

**If we are getting negative leaderboard scores, what does that mean?**

- The leaderboard subtracts the score of a random policy, so a negative value means you are doing worse than random. You should definitely be able to do better than a random policy, especially for the small problem.

**If a state is missing actions, do we assign a \(-\infty\) reward for the missing actions?**

- That’s a modeling decision you need to make, but note that propagating \(-\infty\) rewards can be tricky.

**What do I need to do for full credit?**

- Any RL algorithm with reasonable positive leaderboard scores will be fine. You can use different algorithms for each problem, and we recommend trying several. Please mention everything you try in your README PDF.
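As one illustrative sketch of such an algorithm (not a prescribed or complete solution; remember you must implement it yourself, not use an RL package), tabular Q-learning can be replayed for several sweeps over the batch of logged `(s, a, r, sp)` tuples:

```python
from collections import defaultdict

# Illustrative only: tabular Q-learning replayed over the logged batch.
def batch_q_learning(transitions, gamma=0.95, alpha=0.1, sweeps=50):
    actions = sorted({a for _, a, _, _ in transitions})
    Q = defaultdict(float)  # unseen (s, a) pairs default to 0
    for _ in range(sweeps):
        for s, a, r, sp in transitions:
            target = r + gamma * max(Q[(sp, ap)] for ap in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q, actions

# Toy data: from state 1, action 2 earns a reward while action 1 does not.
Q, acts = batch_q_learning([(1, 1, 0.0, 1), (1, 2, 1.0, 1)])
best = max(acts, key=lambda a: Q[(1, a)])
print(best)  # the rewarded action, 2, is preferred
```

The greedy action per state (`best` above) is what would go into the `.policy` file, with some fallback for states absent from the data.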

**Is there any way we can get a script we can run locally to score our policies? Alternatively, is there any way we can implement an estimate of the score?**

- You are allowed to run the `submit` script as much as you’d like, and it will give you feedback on each of your policies. Note that all three policy files need to exist before `submit` will score any of them. Your final submission is the one that’s graded, and each submission replaces the previous one on the leaderboard. We do not want students to rely on feedback from `submit` as their only means of testing; we encourage plotting your policies and reward values to see if they’re reasonable (not required).
**When it is mentioned that there are 500 possible position values and 100 possible velocity values, does this mean that velocity values are [1, 100], or can they be negative or out of this range?**

- Velocity values can be negative; they are definitely not restricted to [1, 100], and the same goes for position. The underlying values are continuous and have simply been discretized in the given fashion. You are welcome to look at the original OpenAI environment linked in the description.