imitation.data.rollout#

Methods to collect, analyze and manipulate transition and trajectory rollouts.

Functions

discounted_sum(arr, gamma)

Calculate the discounted sum of arr.

flatten_trajectories(trajectories)

Flatten a series of trajectories into arrays of transitions.

flatten_trajectories_with_rew(trajectories)

Flatten a series of trajectories into a single batch of TransitionsWithRew.

generate_trajectories(policy, venv, ...[, ...])

Generate trajectories from a policy and an environment.

generate_transitions(policy, venv, ...[, ...])

Generate obs-action-next_obs-reward tuples.

make_min_episodes(n)

Terminate after collecting n episodes of data.

make_min_timesteps(n)

Terminate at the end of the first episode after collecting n timesteps of data.

make_sample_until([min_timesteps, min_episodes])

Returns a termination condition that stops sampling once minimum numbers of timesteps and episodes are reached.

policy_to_callable(policy, venv[, ...])

Converts any policy-like object into a function from observations to actions.

rollout(policy, venv, sample_until, rng, *)

Generate policy rollouts.

rollout_stats(trajectories)

Calculates various stats for a sequence of trajectories.

unwrap_traj(traj)

Uses RolloutInfoWrapper-captured obs and rews to replace fields.

Classes

TrajectoryAccumulator()

Accumulates trajectories step-by-step.

class imitation.data.rollout.TrajectoryAccumulator[source]#

Bases: object

Accumulates trajectories step-by-step.

Useful for collecting completed trajectories while ignoring partially-completed trajectories (e.g. when rolling out a VecEnv to collect a set number of transitions). Each in-progress trajectory is identified by a ‘key’, which enables several independent trajectories to be collected at once. The key can also be left at its default value of None if you only wish to collect one trajectory.

__init__()[source]#

Initialise the trajectory accumulator.

add_step(step_dict, key=None)[source]#

Add a single step to the partial trajectory identified by key.

Generally a single step could correspond to, e.g., one environment managed by a VecEnv.

Parameters
  • step_dict (Mapping[str, Union[ndarray, DictObs, Mapping[str, Any]]]) – dictionary containing information for the current step. Its keys could include any (or all) attributes of a TrajectoryWithRew (e.g. “obs”, “acts”, etc.).

  • key (Optional[Hashable]) – key to uniquely identify the trajectory to append to, if working with multiple partial trajectories.

Return type

None

add_steps_and_auto_finish(acts, obs, rews, dones, infos)[source]#

Calls add_step repeatedly using acts and the returns from venv.step.

Also automatically calls finish_trajectory() for each done == True. Before calling this method, each environment index key needs to be initialized with the initial observation (usually from venv.reset()).

See the body of imitation.data.rollout.generate_trajectories for an example.

Parameters
  • acts (ndarray) – Actions passed into VecEnv.step().

  • obs (Union[ndarray, DictObs, Dict[str, ndarray]]) – Return value from VecEnv.step(acts).

  • rews (ndarray) – Return value from VecEnv.step(acts).

  • dones (ndarray) – Return value from VecEnv.step(acts).

  • infos (List[dict]) – Return value from VecEnv.step(acts).

Return type

List[TrajectoryWithRew]

Returns

A list of completed trajectories. There should be one trajectory for each True in the dones argument.

finish_trajectory(key, terminal)[source]#

Complete the trajectory labelled with key.

Parameters
  • key (Hashable) – key uniquely identifying which in-progress trajectory to remove.

  • terminal (bool) – trajectory has naturally finished (i.e. includes terminal state).

Returns

The completed trajectory popped from self.partial_trajectories.

Return type

TrajectoryWithRew
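
The sketch below shows the intended usage pattern on a vectorized environment. It is a minimal illustration rather than library-provided code: Gymnasium's CartPole-v1, the two-environment DummyVecEnv (stable_baselines3), and the random action choice are all arbitrary assumptions.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")] * 2)
accum = rollout.TrajectoryAccumulator()

# Seed each in-progress trajectory (keyed by environment index) with its
# initial observation, as required before add_steps_and_auto_finish().
obs = venv.reset()
for env_idx, ob in enumerate(obs):
    accum.add_step(dict(obs=ob), key=env_idx)

completed = []
while len(completed) < 2:
    # Random actions stand in for a policy here.
    acts = np.array([venv.action_space.sample() for _ in range(venv.num_envs)])
    obs, rews, dones, infos = venv.step(acts)
    # Appends one step per environment and pops any trajectory whose done flag is True.
    completed += accum.add_steps_and_auto_finish(acts, obs, rews, dones, infos)
```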

imitation.data.rollout.discounted_sum(arr, gamma)[source]#

Calculate the discounted sum of arr.

If arr is an array of rewards, then this computes the return; however, it can also be used to e.g. compute discounted state occupancy measures.

Parameters
  • arr (ndarray) – 1 or 2-dimensional array to compute discounted sum over. Last axis is timestep, from current time step (first) to last timestep (last). First axis (if present) is batch dimension.

  • gamma (float) – the discount factor used.

Return type

Union[ndarray, float]

Returns

The discounted sum over the timestep axis. The first timestep is undiscounted, i.e. we start at gamma^0.
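
For instance, with an illustrative reward array (not taken from the library docs):

```python
import numpy as np

from imitation.data import rollout

rewards = np.array([1.0, 1.0, 1.0])
# 1.0 * 0.9**0 + 1.0 * 0.9**1 + 1.0 * 0.9**2 = 2.71
ret = rollout.discounted_sum(rewards, gamma=0.9)

# A 2D input is treated as a batch: one discounted sum per row.
batch = rollout.discounted_sum(np.ones((4, 3)), gamma=0.9)  # four values of 2.71
```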

imitation.data.rollout.flatten_trajectories(trajectories)[source]#

Flatten a series of trajectories into arrays of transitions.

Parameters

trajectories (Iterable[Trajectory]) – list of trajectories.

Return type

Transitions

Returns

The trajectories flattened into a single batch of Transitions.
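
A hedged end-to-end sketch (the environment choice and episode count are arbitrary; the random policy comes from passing policy=None to generate_trajectories, documented below):

```python
import gymnasium as gym
import numpy as np
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])
trajs = rollout.generate_trajectories(
    policy=None,  # sample actions randomly
    venv=venv,
    sample_until=rollout.make_min_episodes(2),
    rng=np.random.default_rng(0),
)

transitions = rollout.flatten_trajectories(trajs)
# One flat batch: each trajectory of length n contributes n transitions.
assert len(transitions) == sum(len(traj) for traj in trajs)
```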

imitation.data.rollout.flatten_trajectories_with_rew(trajectories)[source]#

Flatten a series of trajectories into a single batch of TransitionsWithRew.

Return type

TransitionsWithRew

imitation.data.rollout.generate_trajectories(policy, venv, sample_until, rng, *, deterministic_policy=False)[source]#

Generate trajectories from a policy and an environment.

Parameters
  • policy (Union[BaseAlgorithm, BasePolicy, Callable[[Union[ndarray, Dict[str, ndarray]], Optional[Tuple[ndarray, ...]], Optional[ndarray]], Tuple[ndarray, Optional[Tuple[ndarray, ...]]]], None]) – Can be any of the following: 1) A stable_baselines3 policy or algorithm trained on the gym environment. 2) A Callable that takes an ndarray of observations and returns an ndarray of corresponding actions. 3) None, in which case actions will be sampled randomly.

  • venv (VecEnv) – The vectorized environments to interact with.

  • sample_until (Callable[[Sequence[TrajectoryWithRew]], bool]) – A function determining the termination condition. It takes a sequence of trajectories, and returns a bool. Most users will want to use one of min_episodes or min_timesteps.

  • deterministic_policy (bool) – If True, ask the policy to return deterministic actions. Note that trajectories may still be non-deterministic if the environment itself is stochastic!

  • rng (Generator) – Random state used for shuffling trajectories.

Return type

Sequence[TrajectoryWithRew]

Returns

Sequence of trajectories, satisfying sample_until. Additional trajectories may be collected to avoid biasing process towards short episodes; the user should truncate if required.
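
A minimal sketch, assuming a briefly trained stable_baselines3 PPO policy on CartPole-v1 (both choices are illustrative, not part of this API):

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])
expert = PPO("MlpPolicy", venv).learn(total_timesteps=2_000)

trajs = rollout.generate_trajectories(
    policy=expert,
    venv=venv,
    sample_until=rollout.make_min_episodes(5),
    rng=np.random.default_rng(42),
    deterministic_policy=True,
)
print(len(trajs), "trajectories; returns:", [float(sum(t.rews)) for t in trajs])
```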

imitation.data.rollout.generate_transitions(policy, venv, n_timesteps, rng, *, truncate=True, **kwargs)[source]#

Generate obs-action-next_obs-reward tuples.

Parameters
  • policy (Union[BaseAlgorithm, BasePolicy, Callable[[Union[ndarray, Dict[str, ndarray]], Optional[Tuple[ndarray, ...]], Optional[ndarray]], Tuple[ndarray, Optional[Tuple[ndarray, ...]]]], None]) – Can be any of the following: 1) A stable_baselines3 policy or algorithm trained on the gym environment. 2) A Callable that takes an ndarray of observations and returns an ndarray of corresponding actions. 3) None, in which case actions will be sampled randomly.

  • venv (VecEnv) – The vectorized environments to interact with.

  • n_timesteps (int) – The minimum number of timesteps to sample.

  • rng (Generator) – The random state to use for sampling trajectories.

  • truncate (bool) – If True, then drop any additional samples to ensure that exactly n_timesteps samples are returned.

  • **kwargs – Passed-through to generate_trajectories.

Return type

TransitionsWithRew

Returns

A batch of TransitionsWithRew. The length of the constituent arrays is guaranteed to be at least n_timesteps, but may be greater unless truncate is True, since data is collected until the end of each episode.
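
A short sketch (the environment and timestep count are arbitrary assumptions):

```python
import gymnasium as gym
import numpy as np
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])
transitions = rollout.generate_transitions(
    policy=None,  # random actions
    venv=venv,
    n_timesteps=500,
    rng=np.random.default_rng(0),
    truncate=True,  # trim overshoot so exactly 500 transitions are returned
)
assert len(transitions) == 500
```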

imitation.data.rollout.make_min_episodes(n)[source]#

Terminate after collecting n episodes of data.

Parameters

n (int) – Minimum number of episodes of data to collect. May overshoot if two episodes complete simultaneously (unlikely).

Return type

Callable[[Sequence[TrajectoryWithRew]], bool]

Returns

A function implementing this termination condition.

imitation.data.rollout.make_min_timesteps(n)[source]#

Terminate at the end of the first episode after collecting n timesteps of data.

Parameters

n (int) – Minimum number of timesteps of data to collect. May overshoot to nearest episode boundary.

Return type

Callable[[Sequence[TrajectoryWithRew]], bool]

Returns

A function implementing this termination condition.

imitation.data.rollout.make_sample_until(min_timesteps=None, min_episodes=None)[source]#

Returns a termination condition that stops sampling once minimum numbers of timesteps and episodes are reached.

Parameters
  • min_timesteps (Optional[int]) – Sampling will not stop until there are at least this many timesteps.

  • min_episodes (Optional[int]) – Sampling will not stop until there are at least this many episodes.

Return type

Callable[[Sequence[TrajectoryWithRew]], bool]

Returns

A termination condition.

Raises

ValueError – If neither min_timesteps nor min_episodes is set, or if either is non-positive.
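
For example (threshold values are illustrative):

```python
from imitation.data import rollout

# Stop only once both thresholds are met: at least 2000 timesteps
# *and* at least 10 completed episodes.
sample_until = rollout.make_sample_until(min_timesteps=2000, min_episodes=10)

# Single-condition equivalents:
at_least_10_episodes = rollout.make_min_episodes(10)
at_least_2000_steps = rollout.make_min_timesteps(2000)
```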

imitation.data.rollout.policy_to_callable(policy, venv, deterministic_policy=False)[source]#

Converts any policy-like object into a function from observations to actions.

Return type

Callable[[Union[ndarray, Dict[str, ndarray]], Optional[Tuple[ndarray, ...]], Optional[ndarray]], Tuple[ndarray, Optional[Tuple[ndarray, ...]]]]
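
A sketch of the conversion, assuming an untrained stable_baselines3 PPO policy on CartPole-v1 (illustrative choices):

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])
expert = PPO("MlpPolicy", venv)

get_actions = rollout.policy_to_callable(expert, venv, deterministic_policy=True)

# The callable mirrors stable_baselines3's predict-style interface:
# (observations, recurrent states, episode-start mask) -> (actions, states).
obs = venv.reset()
acts, states = get_actions(obs, None, None)
```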

imitation.data.rollout.rollout(policy, venv, sample_until, rng, *, unwrap=True, exclude_infos=True, verbose=True, **kwargs)[source]#

Generate policy rollouts.

This function wraps generate_trajectories. It additionally lets the user replace rewards and observations with their original (pre-wrapper) values if the environment is wrapped, exclude infos from the trajectories, and print summary statistics of the rollout.

When exclude_infos is True (the default), the .infos field of each Trajectory is set to None to save space.

Parameters
  • policy (Union[BaseAlgorithm, BasePolicy, Callable[[Union[ndarray, Dict[str, ndarray]], Optional[Tuple[ndarray, ...]], Optional[ndarray]], Tuple[ndarray, Optional[Tuple[ndarray, ...]]]], None]) – Can be any of the following: 1) A stable_baselines3 policy or algorithm trained on the gym environment. 2) A Callable that takes an ndarray of observations and returns an ndarray of corresponding actions. 3) None, in which case actions will be sampled randomly.

  • venv (VecEnv) – The vectorized environments.

  • sample_until (Callable[[Sequence[TrajectoryWithRew]], bool]) – End condition for rollout sampling.

  • rng (Generator) – Random state to use for sampling.

  • unwrap (bool) – If True, then save original observations and rewards (instead of potentially wrapped observations and rewards) by calling unwrap_traj().

  • exclude_infos (bool) – If True, then exclude infos from pickle by setting this field to None. Excluding infos can save a lot of space during pickles.

  • verbose (bool) – If True, then print out rollout stats before saving.

  • **kwargs – Passed through to generate_trajectories.

Return type

Sequence[TrajectoryWithRew]

Returns

Sequence of trajectories, satisfying sample_until. Additional trajectories may be collected to avoid biasing process towards short episodes; the user should truncate if required.
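
A minimal end-to-end sketch; the RolloutInfoWrapper wrapping is what makes unwrap=True meaningful, while CartPole-v1 and the PPO training budget are arbitrary assumptions:

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.data import rollout
from imitation.data.wrappers import RolloutInfoWrapper

# RolloutInfoWrapper records the original obs and rews so that unwrap=True
# (the default) can restore them via unwrap_traj().
venv = DummyVecEnv([lambda: RolloutInfoWrapper(gym.make("CartPole-v1"))])
expert = PPO("MlpPolicy", venv).learn(total_timesteps=2_000)

trajs = rollout.rollout(
    expert,
    venv,
    rollout.make_sample_until(min_episodes=10),
    rng=np.random.default_rng(0),
)
```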

imitation.data.rollout.rollout_stats(trajectories)[source]#

Calculates various stats for a sequence of trajectories.

Parameters

trajectories (Sequence[TrajectoryWithRew]) – Sequence of trajectories.

Return type

Mapping[str, float]

Returns

Dictionary containing n_traj collected (int), along with episode return statistics (keys: {monitor_,}return_{min,mean,std,max}, float values) and trajectory length statistics (keys: len_{min,mean,std,max}, float values).

return_* values are calculated from environment rewards. monitor_* values are calculated from Monitor-captured rewards, and are only included if the trajectories contain Monitor infos.
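
Continuing the rollout() sketch above (trajs is assumed to come from that block):

```python
from imitation.data import rollout

stats = rollout.rollout_stats(trajs)  # trajs: Sequence[TrajectoryWithRew]
print(f"{stats['n_traj']} episodes, mean return {stats['return_mean']:.1f}")
```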

imitation.data.rollout.unwrap_traj(traj)[source]#

Uses RolloutInfoWrapper-captured obs and rews to replace fields.

This can be useful for bypassing other wrappers to retrieve the original obs and rews.

Fails if infos is None, or if the trajectory was generated from an environment without imitation.data.wrappers.RolloutInfoWrapper.

Parameters

traj (TrajectoryWithRew) – A trajectory generated from RolloutInfoWrapper-wrapped Environments.

Return type

TrajectoryWithRew

Returns

A copy of traj with replaced obs and rews fields.

Raises

ValueError – If traj.infos is None