
v0.1.0 (first release)

Released by @Lucaweihs on 01 Sep 21:06 · commit 62e33fd

AllenAct is a modular and flexible learning framework designed with a focus on the unique requirements of Embodied-AI research. It provides first-class support for a growing collection of embodied environments, tasks, and algorithms, offers reproductions of state-of-the-art models, and includes extensive documentation, tutorials, start-up code, and pre-trained models.

In this first release we provide:

  • Support for several environments: We support environments used for Embodied-AI research such as AI2-THOR, Habitat, and MiniGrid, and we have made it easy to incorporate new environments.
  • Different input modalities: The framework supports a variety of input modalities such as RGB images, depth, language, and GPS readings.
  • Customizable training pipelines: The framework includes not only various training algorithms (A2C, PPO, DAgger, etc.) but also allows one to easily combine these algorithms into pipelines (e.g., imitation learning followed by reinforcement learning; see the sketch after this list).
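
To make the pipeline idea concrete, here is a minimal sketch of a two-stage training pipeline (imitation learning followed by PPO). The import paths and parameter names below follow the project's tutorials for the pip-installable package and may differ slightly in this release, so treat them as illustrative rather than exact:

```python
# A minimal sketch of a two-stage pipeline: imitation, then PPO.
# Import paths follow the pip-installable `allenact` package and may
# differ in this release; treat the names here as illustrative.
import torch.optim as optim

from allenact.algorithms.onpolicy_sync.losses.imitation import Imitation
from allenact.algorithms.onpolicy_sync.losses.ppo import PPO, PPOConfig
from allenact.utils.experiment_utils import (
    Builder,
    PipelineStage,
    TrainingPipeline,
)


def two_stage_pipeline() -> TrainingPipeline:
    return TrainingPipeline(
        named_losses={
            "imitation_loss": Imitation(),  # expert supervision (DAgger-style)
            "ppo_loss": PPO(**PPOConfig),   # on-policy RL fine-tuning
        },
        pipeline_stages=[
            # Stage 1: imitate an expert for the first 1M steps.
            PipelineStage(loss_names=["imitation_loss"], max_stage_steps=int(1e6)),
            # Stage 2: switch to PPO for the remaining 9M steps.
            PipelineStage(loss_names=["ppo_loss"], max_stage_steps=int(9e6)),
        ],
        optimizer_builder=Builder(optim.Adam, dict(lr=3e-4)),
        num_steps=128,        # rollout length per update
        num_mini_batch=4,
        update_repeats=3,
        max_grad_norm=0.5,
        gamma=0.99,
        use_gae=True,
        gae_lambda=0.95,
        save_interval=int(1e6),
        metric_accumulate_interval=10000,
        advance_scene_rollout_period=None,
    )
```

This pipeline would typically be returned from the `training_pipeline` method of an experiment config; the runner then advances through the stages automatically as the step budget of each stage is exhausted.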

AllenAct currently supports the following environments, tasks, and algorithms. We are actively working on integrating recently developed models and frameworks. Moreover, our documentation provides tutorials demonstrating how to integrate the algorithms, tasks, and environments of your choice.
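
To give a flavor of what such an integration looks like, here is a heavily abbreviated sketch of a custom task. The base-class and method names (`Task`, `RLStepResult`, the module paths) are assumptions based on the project's documentation and may differ in this release; the environment itself (`FindGoalTask`, `at_goal`) is hypothetical. Consult the tutorials for the exact interface:

```python
# Abbreviated sketch of plugging in a custom task. Base-class and method
# names are assumptions based on the project's docs; the grid environment
# and its methods (at_goal, etc.) are hypothetical.
from typing import Any, Tuple

import gym

from allenact.base_abstractions.misc import RLStepResult
from allenact.base_abstractions.task import Task


class FindGoalTask(Task):  # hypothetical task: reach a goal cell in a grid
    @classmethod
    def class_action_names(cls, **kwargs) -> Tuple[str, ...]:
        return ("move_ahead", "rotate_left", "rotate_right")

    @property
    def action_space(self) -> gym.Space:
        return gym.spaces.Discrete(len(self.class_action_names()))

    def _step(self, action: int) -> RLStepResult:
        self.env.step(self.class_action_names()[action])  # hypothetical env API
        return RLStepResult(
            observation=self.get_observations(),
            reward=1.0 if self.env.at_goal() else -0.01,  # hypothetical reward
            done=self.is_done(),
            info={},
        )

    def reached_terminal_state(self) -> bool:
        return self.env.at_goal()  # hypothetical

    def render(self, mode: str = "rgb", *args, **kwargs) -> Any:
        return self.env.render(mode)  # hypothetical

    def close(self) -> None:
        self.env.close()  # hypothetical
```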

| Environments | Tasks | Algorithms |
| --- | --- | --- |
| iTHOR, RoboTHOR, Habitat, MiniGrid | PointNav, ObjectNav, MiniGrid tasks | A2C, PPO, DD-PPO, DAgger, Off-policy Imitation |

Note that we support distributed training for all of the above algorithms.