# PPO2

* Original paper: <https://arxiv.org/abs/1707.06347>
* Baselines blog post: <https://blog.openai.com/openai-baselines-ppo/>
* `python -m baselines.ppo2.run_atari` runs the algorithm for 40M frames = 10M timesteps on an Atari game. See help (`-h`) for more options.
* `python -m baselines.ppo2.run_mujoco` runs the algorithm for 1M timesteps on a MuJoCo environment (MuJoCo uses no frameskip, so frames and timesteps coincide). See help (`-h`) for more options.
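
The core of the algorithm is the clipped surrogate objective from the paper. A minimal NumPy sketch of that objective (the function name and the default clip range of 0.2 are illustrative, not the exact identifiers used in `baselines.ppo2`):

```python
import numpy as np

def ppo_clip_objective(ratio, adv, clip_range=0.2):
    # ratio: pi_new(a|s) / pi_old(a|s) per sample
    # adv:   advantage estimates per sample
    # L^CLIP = mean( min(ratio * adv, clip(ratio, 1-eps, 1+eps) * adv) )
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip_range, 1.0 + clip_range) * adv
    # Taking the elementwise minimum makes the objective pessimistic:
    # large policy updates are not rewarded beyond the clip boundary.
    return np.minimum(unclipped, clipped).mean()
```

In `ppo2` this quantity is maximized (the implementation minimizes its negative) alongside a value-function loss and an entropy bonus.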
