Deep Q-Learning (DQN)¶
DQN is an extension of Q-learning that makes use of a replay buffer and target network to improve learning stability.
DQN paper: https://arxiv.org/abs/1312.5602
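At each learning step, the agent samples a batch of past transitions from the replay buffer and regresses its online Q-network towards a bootstrapped target computed with the slowly-updated target network. As a rough PyTorch sketch of that update (illustrative only, not AgileRL's internal code; q_net and q_target stand in for the agent's online and target networks):

import torch
import torch.nn.functional as F

def dqn_loss(q_net, q_target, states, actions, rewards, next_states, dones, gamma=0.99):
    # Bootstrapped target: y = r + gamma * (1 - done) * max_a' Q_target(s', a')
    with torch.no_grad():
        q_next = q_target(next_states).max(dim=1, keepdim=True)[0]
        y = rewards + gamma * (1 - dones) * q_next
    q_pred = q_net(states).gather(1, actions.long())  # Q-values of the actions taken
    return F.mse_loss(q_pred, y)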
Can I use it?¶
|            | Action | Observation |
| ---------- | ------ | ----------- |
| Discrete   | ✔️     | ✔️          |
| Continuous | ❌     | ✔️          |
Example¶
import gymnasium as gym
import numpy as np
from agilerl.utils.utils import makeVectEnvs
from agilerl.components.replay_buffer import ReplayBuffer
from agilerl.algorithms.dqn import DQN

# Create environment and Experience Replay Buffer
env = makeVectEnvs('LunarLander-v2', num_envs=1)
try:
    state_dim = env.single_observation_space.n       # Discrete observation space
    one_hot = True                                   # Requires one-hot encoding
except AttributeError:
    state_dim = env.single_observation_space.shape   # Continuous observation space
    one_hot = False                                  # Does not require one-hot encoding
try:
    action_dim = env.single_action_space.n           # Discrete action space
except AttributeError:
    action_dim = env.single_action_space.shape[0]    # Continuous action space

channels_last = False  # Swap image channels dimension from last to first [H, W, C] -> [C, H, W]
if channels_last:
    state_dim = (state_dim[2], state_dim[0], state_dim[1])

field_names = ["state", "action", "reward", "next_state", "done"]
memory = ReplayBuffer(action_dim=action_dim, memory_size=10000, field_names=field_names)
agent = DQN(state_dim=state_dim, action_dim=action_dim, one_hot=one_hot)  # Create DQN agent

epsilon = 1.0  # Exploration probability for epsilon-greedy action selection

state = env.reset()[0]  # Reset environment at start of episode
while True:
    if channels_last:
        state = np.moveaxis(state, [3], [1])
    action = agent.getAction(state, epsilon)           # Get next action from agent
    next_state, reward, done, _, _ = env.step(action)  # Act in environment

    # Save experience to replay buffer
    if channels_last:
        memory.save2memoryVectEnvs(state, action, reward, np.moveaxis(next_state, [3], [1]), done)
    else:
        memory.save2memoryVectEnvs(state, action, reward, next_state, done)

    # Learn according to learning frequency
    if memory.counter % agent.learn_step == 0 and len(memory) >= agent.batch_size:
        experiences = memory.sample(agent.batch_size)  # Sample replay buffer
        agent.learn(experiences)                       # Learn according to agent's RL algorithm

    state = next_state
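In a full training script, epsilon would typically start high and be decayed towards a small floor, so the agent explores less as it improves. A simple multiplicative decay that could be added at the end of each loop iteration above (the schedule values are illustrative):

eps_end, eps_decay = 0.01, 0.995

# At the end of each iteration of the training loop:
epsilon = max(eps_end, epsilon * eps_decay)  # Decay exploration towards eps_end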
To configure the network architecture, pass a dict to the DQN net_config
field. For an MLP, this can be as simple as:
NET_CONFIG = {
'arch': 'mlp', # Network architecture
'hidden_size': [32, 32] # Network hidden size
}
Or for a CNN:
NET_CONFIG = {
'arch': 'cnn', # Network architecture
'hidden_size': [128], # Network hidden size
'channel_size': [32, 32], # CNN channel size
'kernel_size': [8, 4], # CNN kernel size
'stride_size': [4, 2], # CNN stride size
'normalize': True # Normalize image from range [0,255] to [0,1]
}
agent = DQN(state_dim=state_dim, action_dim=action_dim, one_hot=one_hot, net_config=NET_CONFIG) # Create DQN agent
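Any of the constructor arguments listed under Parameters below can be overridden in the same way. For example, a sketch of an agent configured to use double Q-learning with a larger batch size (the values shown are illustrative, not recommendations):

agent = DQN(
    state_dim=state_dim,
    action_dim=action_dim,
    one_hot=one_hot,
    net_config=NET_CONFIG,
    double=True,     # Use double Q-learning to reduce value overestimation
    batch_size=128,  # Size of batched sample from replay buffer for learning
    lr=0.0001,       # Learning rate for optimizer
)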
Saving and loading agents¶
To save an agent, use the saveCheckpoint method:
from agilerl.algorithms.dqn import DQN
agent = DQN(state_dim=state_dim, action_dim=action_dim, one_hot=one_hot) # Create DQN agent
checkpoint_path = "path/to/checkpoint"
agent.saveCheckpoint(checkpoint_path)
To load a saved agent, use the load method:
from agilerl.algorithms.dqn import DQN
checkpoint_path = "path/to/checkpoint"
agent = DQN.load(checkpoint_path)
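A loaded agent can then be used directly, for example to evaluate it with the test() method documented below (the environment and path are illustrative):

from agilerl.utils.utils import makeVectEnvs
from agilerl.algorithms.dqn import DQN

env = makeVectEnvs('LunarLander-v2', num_envs=1)
agent = DQN.load("path/to/checkpoint")
mean_score = agent.test(env, max_steps=500, loop=3)  # Mean score over 3 test episodes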
Parameters¶
- class agilerl.algorithms.dqn.DQN(state_dim, action_dim, one_hot, index=0, net_config={'arch': 'mlp', 'hidden_size': [64, 64]}, batch_size=64, lr=0.0001, learn_step=5, gamma=0.99, tau=0.001, mut=None, double=False, actor_network=None, device='cpu', accelerator=None, wrap=True)¶
The DQN algorithm class. DQN paper: https://arxiv.org/abs/1312.5602
- Parameters:
state_dim (list[int]) – State observation dimension
action_dim (int) – Action dimension
one_hot (bool) – One-hot encoding, used with discrete observation spaces
index (int, optional) – Index to keep track of object instance during tournament selection and mutation, defaults to 0
net_config (dict, optional) – Network configuration, defaults to mlp with hidden size [64,64]
batch_size (int, optional) – Size of batched sample from replay buffer for learning, defaults to 64
lr (float, optional) – Learning rate for optimizer, defaults to 1e-4
learn_step (int, optional) – Learning frequency, defaults to 5
gamma (float, optional) – Discount factor, defaults to 0.99
tau (float, optional) – For soft update of target network parameters, defaults to 1e-3
mut (str, optional) – Most recent mutation to agent, defaults to None
double (bool, optional) – Use double Q-learning, defaults to False
actor_network (nn.Module, optional) – Custom actor network, defaults to None
device (str, optional) – Device for accelerated computing, ‘cpu’ or ‘cuda’, defaults to ‘cpu’
accelerator (accelerate.Accelerator(), optional) – Accelerator for distributed computing, defaults to None
wrap (bool, optional) – Wrap models for distributed training upon creation, defaults to True
- clone(index=None, wrap=True)¶
Returns cloned agent identical to self.
- Parameters:
index (int, optional) – Index to keep track of agent for tournament selection and mutation, defaults to None
- getAction(state, epsilon=0, action_mask=None)¶
Returns the next action to take in the environment. Epsilon is the probability of taking a random action, used for exploration. For greedy behaviour, set epsilon to 0.
- learn(experiences)¶
Updates agent network parameters to learn from experiences.
- Parameters:
experiences – List of batched states, actions, rewards, next_states, dones in that order.
- classmethod load(path, device='cpu', accelerator=None)¶
Creates agent with properties and network weights loaded from path.
- Parameters:
path (string) – Location to load checkpoint from
device (str, optional) – Device for accelerated computing, ‘cpu’ or ‘cuda’, defaults to ‘cpu’
accelerator (accelerate.Accelerator(), optional) – Accelerator for distributed computing, defaults to None
- loadCheckpoint(path)¶
Loads saved agent properties and network weights from checkpoint.
- Parameters:
path (string) – Location to load checkpoint from
- saveCheckpoint(path)¶
Saves a checkpoint of agent properties and network weights to path.
- Parameters:
path (string) – Location to save checkpoint at
- softUpdate()¶
Soft updates the target network's parameters towards the online network's, controlled by tau (see the sketch after this class reference).
- test(env, swap_channels=False, max_steps=500, loop=3)¶
Returns mean test score of agent in environment with epsilon-greedy policy.
- Parameters:
env (Gym-style environment) – The environment to be tested in
swap_channels (bool, optional) – Swap image channels dimension from last to first [H, W, C] -> [C, H, W], defaults to False
max_steps (int, optional) – Maximum number of testing steps, defaults to 500
loop (int, optional) – Number of testing loops/episodes to complete. The returned score is the mean over these tests. Defaults to 3
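For reference, the soft update performed by softUpdate() is Polyak averaging of the online network into the target network, θ_target ← τ·θ_online + (1 − τ)·θ_target, with τ the agent's tau parameter. A minimal sketch of the operation (online_net and target_net stand in for the agent's internal networks):

def soft_update(online_net, target_net, tau=0.001):
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    for target_param, online_param in zip(target_net.parameters(), online_net.parameters()):
        target_param.data.copy_(tau * online_param.data + (1.0 - tau) * target_param.data)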