Imitation#

Imitation provides clean implementations of imitation and reward learning algorithms under a unified, user-friendly API. Currently we provide implementations of Behavioral Cloning (BC), DAgger (with synthetic examples), density-based reward modeling, Maximum Causal Entropy Inverse Reinforcement Learning (MCE IRL), Adversarial Inverse Reinforcement Learning (AIRL), Generative Adversarial Imitation Learning (GAIL), and Deep RL from Human Preferences (DRLHP).

You can find us on GitHub at https://github.com/HumanCompatibleAI/imitation.

Main Features#

  • Built on and compatible with Stable Baselines 3 (SB3).

  • Modular PyTorch implementations of Behavioral Cloning, DAgger, GAIL, and AIRL that can train arbitrary SB3 policies (see the training sketch after this list).

  • GAIL and AIRL have customizable reward and discriminator networks.

  • Scripts to train policies using SB3 and save rollouts from these policies as synthetic “expert” demonstrations.

  • Data structures and scripts for loading and storing expert demonstrations (a save/load sketch follows the training example below).
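
As a quick illustration of how these pieces fit together, here is a minimal behavioral cloning sketch on CartPole-v1: a briefly trained SB3 PPO agent stands in as the "expert", its rollouts become synthetic demonstrations, and BC is trained on the flattened transitions. Treat it as a sketch rather than a pinned recipe; exact signatures (for example the rng argument, or gym vs. gymnasium imports) vary between imitation releases.

import gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.algorithms import bc
from imitation.data import rollout
from imitation.data.wrappers import RolloutInfoWrapper

rng = np.random.default_rng(0)
env = gym.make("CartPole-v1")

# Briefly train a PPO "expert" whose rollouts serve as synthetic demonstrations.
expert = PPO("MlpPolicy", env, verbose=0)
expert.learn(10_000)

# Collect expert trajectories; RolloutInfoWrapper records full episodes.
venv = DummyVecEnv([lambda: RolloutInfoWrapper(gym.make("CartPole-v1"))])
rollouts = rollout.rollout(
    expert,
    venv,
    rollout.make_sample_until(min_episodes=50),
    rng=rng,
)
transitions = rollout.flatten_trajectories(rollouts)

# Behavioral cloning: supervised learning on expert state-action pairs.
bc_trainer = bc.BC(
    observation_space=env.observation_space,
    action_space=env.action_space,
    demonstrations=transitions,
    rng=rng,
)
bc_trainer.train(n_epochs=5)

mean_reward, _ = evaluate_policy(bc_trainer.policy, env, n_eval_episodes=10)
print(f"Mean reward after BC: {mean_reward:.1f}")

DAgger, GAIL, and AIRL follow the same pattern: construct the trainer with demonstrations (plus, for the adversarial algorithms, a vectorized environment and a reward or discriminator network), then call its train method.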
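
The demonstration data structures can also be written to disk and reloaded later. Below is a minimal save/load sketch using imitation.data.serialize, reusing the rollouts variable from the example above; note that the on-disk format and path handling differ across imitation versions.

from imitation.data import serialize

# Persist the expert trajectories, then reload them for a later training run.
serialize.save("expert_rollouts", rollouts)
loaded_rollouts = serialize.load("expert_rollouts")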

Citing imitation#

If you use imitation in your research project, please cite our paper to help us track our impact and enable readers to more easily replicate your results. You may use the following BibTeX:

@misc{gleave2022imitation,
  author = {Gleave, Adam and Taufeeque, Mohammad and Rocamonde, Juan and Jenner, Erik and Wang, Steven H. and Toyer, Sam and Ernestus, Maximilian and Belrose, Nora and Emmons, Scott and Russell, Stuart},
  title = {imitation: Clean Imitation Learning Implementations},
  year = {2022},
  howPublished = {arXiv:2211.11972v1 [cs.LG]},
  archivePrefix = {arXiv},
  eprint = {2211.11972},
  primaryClass = {cs.LG},
  url = {https://arxiv.org/abs/2211.11972},
}

API Reference#

imitation: implementations of imitation and reward learning algorithms.
