Multi-Agent Experience Replay Buffer

To train a population of RL agents efficiently, off-policy algorithms can be used to share memory within the population. This reduces the exploration required by any individual agent, because each agent can learn faster from the behaviour of the others. For example, if you could watch a group of people attempt to solve a maze, you could learn from their mistakes and successes without having to explore the entire maze yourself.

The object used to store experiences collected by agents in the environment is called the Experience Replay Buffer; for multi-agent environments it is implemented by the MultiAgentReplayBuffer() class. During training, experiences are added with the MultiAgentReplayBuffer.save_to_memory() method and sampled with MultiAgentReplayBuffer.sample().

from agilerl.components.multi_agent_replay_buffer import MultiAgentReplayBuffer
import torch

field_names = ["state", "action", "reward", "next_state", "done"]

# INIT_HP is assumed to be the hyperparameter dictionary defined elsewhere in the
# training script, e.g. INIT_HP = {"AGENT_IDS": ["agent_0", "agent_1"]}
memory = MultiAgentReplayBuffer(memory_size=1_000_000,          # Max replay buffer size
                                field_names=field_names,        # Field names to store in memory
                                agent_ids=INIT_HP['AGENT_IDS'], # ID for each agent
                                device=torch.device("cuda"))    # Device for accelerated computing
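
The buffer is typically filled inside the environment interaction loop. The sketch below assumes a PettingZoo-style parallel environment (env, num_steps and the random action selection are placeholders), in which observations, actions, rewards, terminations and truncations are dictionaries keyed by agent ID, saved in the same order as field_names above.

# Minimal interaction loop sketch (env and num_steps are assumed/hypothetical)
state, info = env.reset()
for _ in range(num_steps):
    # Placeholder policy: sample a random action for every active agent
    action = {agent_id: env.action_space(agent_id).sample() for agent_id in env.agents}
    next_state, reward, termination, truncation, info = env.step(action)
    done = {agent_id: termination[agent_id] or truncation[agent_id] for agent_id in termination}

    # Save one transition element per field, in the same order as field_names
    memory.save_to_memory(state, action, reward, next_state, done, is_vectorised=False)
    state = next_state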

Parameters

class agilerl.components.multi_agent_replay_buffer.MultiAgentReplayBuffer(memory_size, field_names, agent_ids, device=None)

The Multi-Agent Experience Replay Buffer class. Used to store multiple agents’ experiences and allow off-policy learning.

Parameters:
  • memory_size (int) – Maximum length of the replay buffer

  • field_names (list[str]) – Field names for experience named tuple, e.g. ['state', 'action', 'reward']

  • agent_ids (list[str]) – Names of all agents that will act in the environment

  • device (str, optional) – Device for accelerated computing, ‘cpu’ or ‘cuda’, defaults to None

sample(batch_size, *args)

Returns a sample of experiences from memory.

Parameters:
  • batch_size (int) – Number of samples to return

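For example, once enough transitions have been collected, a batch can be drawn and passed to an agent's learning step. The unpacking below assumes sample() returns one element per field, each a dictionary keyed by agent ID; the exact return structure may differ between versions.

batch_size = 64

# Only sample once the buffer holds at least one full batch
if len(memory) >= batch_size:
    experiences = memory.sample(batch_size)
    # Assumed structure: one element per field, each keyed by agent ID
    states, actions, rewards, next_states, dones = experiences
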
save_to_memory(*args, is_vectorised=False)

Saves experience to memory, dispatching to save_to_memory_single_env() or save_to_memory_vect_envs() depending on whether the environment is vectorised.

Parameters:
  • *args – Variable length argument list. Contains batched or unbatched transition elements in consistent order, e.g. states, actions, rewards, next_states, dones

  • is_vectorised (bool) – Boolean flag indicating if the environment has been vectorised
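
In other words, the flag selects between the two helper methods documented below. A minimal sketch of the two call patterns, where the transition variables are placeholders:

# Single environment: each field holds one value per agent
memory.save_to_memory(state, action, reward, next_state, done, is_vectorised=False)

# Vectorised environments: each field holds a batch with one entry per sub-environment
memory.save_to_memory(states, actions, rewards, next_states, dones, is_vectorised=True)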

save_to_memory_single_env(*args)

Saves experience to memory.

Parameters:
  • *args – Variable length argument list. Contains transition elements in consistent order, e.g. state, action, reward, next_state, done

save_to_memory_vect_envs(*args)

Saves multiple experiences to memory.

Parameters:
  • *args – Variable length argument list. Contains batched transition elements in consistent order, e.g. states, actions, rewards, next_states, dones
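
As a sketch, batched transitions from a vectorised environment might look like the following, assuming each transition element is a dictionary keyed by agent ID with one row per sub-environment (the shapes, dimensions and agent IDs here are illustrative assumptions):

import numpy as np

num_envs = 4  # hypothetical number of vectorised sub-environments
agent_ids = ["agent_0", "agent_1"]  # hypothetical agent IDs

# One entry per agent, each batched over the vectorised environments
states = {agent_id: np.random.rand(num_envs, 6) for agent_id in agent_ids}
actions = {agent_id: np.random.randint(0, 2, (num_envs, 1)) for agent_id in agent_ids}
rewards = {agent_id: np.zeros((num_envs, 1)) for agent_id in agent_ids}
next_states = {agent_id: np.random.rand(num_envs, 6) for agent_id in agent_ids}
dones = {agent_id: np.zeros((num_envs, 1), dtype=bool) for agent_id in agent_ids}

memory.save_to_memory_vect_envs(states, actions, rewards, next_states, dones)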