Evolvable BERT

class agilerl.modules.bert.EvolvableBERT(*args, **kwargs)

The Evolvable BERT class.

Parameters:
  • encoder_layers (list[int]) – Encoder layer(s) hidden size

  • decoder_layers (list[int]) – Decoder layer(s) hidden size

  • end2end (bool, optional) – End to end transformer, using positional and token embeddings, defaults to True

  • src_vocab_size (int, optional) – Source vocabulary size, defaults to 10837

  • tgt_vocab_size (int, optional) – Target vocabulary size, defaults to 10837

  • encoder_norm (bool, optional) – Encoder output normalization, defaults to True

  • decoder_norm (bool, optional) – Decoder output normalization, defaults to True

  • d_model (int, optional) – Number of expected features in the encoder/decoder inputs, defaults to 512

  • n_head (int, optional) – Number of heads in the multi-head attention layers, defaults to 8

  • dropout (float, optional) – Dropout value, defaults to 0.1

  • activation (str, optional) – Activation function of encoder/decoder intermediate layer, defaults to ‘ReLU’

  • layer_norm_eps (float, optional) – Epsilon value in layer normalization components, defaults to 1e-5

  • batch_first (bool, optional) – Input/output tensor order: if True, (batch, seq, feature); if False, (seq, batch, feature). Defaults to False

  • norm_first (bool, optional) – Perform LayerNorm before other attention and feedforward operations, defaults to False

  • max_encoder_layers (int, optional) – Maximum number of encoder layers, defaults to 12

  • max_decoder_layers (int, optional) – Maximum number of decoder layers, defaults to 12

  • device (str, optional) – Device for accelerated computing, ‘cpu’ or ‘cuda’, defaults to ‘cpu’

add_decoder_layer()

Adds a decoder layer to the transformer.

add_encoder_layer()

Adds an encoder layer to the transformer.

add_node(network: str | None = None, hidden_layer: int | None = None, numb_new_nodes: int | None = None) Dict[str, Any]

Adds nodes to a hidden layer of the encoder/decoder.

Parameters:
  • network (str, optional) – Network to add node to, ‘encoder’ or ‘decoder’, defaults to None

  • hidden_layer (int, optional) – Depth of hidden layer to add nodes to, defaults to None

  • numb_new_nodes (int, optional) – Number of nodes to add to hidden layer, defaults to None

Returns:

Dictionary containing the hidden layer, the number of new nodes, and the network

Return type:

Dict[str, Any]

build_networks()

Creates and returns the transformer neural network.

check_encoder_sparsity_fast_path(src: Tensor, output: Tensor, first_layer: Module, str_first_layer: str, mask: Tensor, src_key_padding_mask: Tensor, src_key_padding_mask_for_layers: Tensor) Tuple[Tensor, bool, Tensor]

Returns the encoder output, the conversion to nested tensors, and the padding mask, depending on whether the sparsity fast path is possible.

Parameters:
  • src (torch.Tensor) – Encoder input sequence

  • output (torch.Tensor) – Encoder output sequence

  • first_layer (torch.nn.Module) – First layer of encoder

  • str_first_layer (str) – Name of first layer of encoder

  • mask (torch.Tensor) – Mask for the src sequence

  • src_key_padding_mask (torch.Tensor) – Tensor mask for src keys per batch

  • src_key_padding_mask_for_layers (torch.Tensor) – Tensor mask for src keys per batch for layers

count_parameters(without_layer_norm: bool = False) int

Returns the number of parameters in the neural network.

Parameters:

without_layer_norm (bool, optional) – Exclude normalization layers, defaults to False
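The parameter count can be sketched in plain Python by iterating over a mapping of parameter names to shapes, as a stand-in for iterating `model.named_parameters()`. Filtering normalization parameters by a `"norm"` substring in the name is an assumption about how this class identifies them, and the example shapes are hypothetical:

```python
from math import prod

def count_parameters(named_shapes, without_layer_norm=False):
    """Count parameters from a mapping of parameter name -> shape.

    Stand-in for iterating model.named_parameters(); the "norm"
    name filter is an assumption about how normalization layers
    are identified.
    """
    total = 0
    for name, shape in named_shapes.items():
        if without_layer_norm and "norm" in name:
            continue  # skip normalization parameters
        total += prod(shape)
    return total

# Hypothetical parameter shapes for one encoder layer
shapes = {
    "encoder.self_attn.in_proj_weight": (1536, 512),
    "encoder.norm1.weight": (512,),
    "encoder.norm1.bias": (512,),
}
all_params = count_parameters(shapes)
no_norm_params = count_parameters(shapes, without_layer_norm=True)
```

With `without_layer_norm=True`, the two 512-element normalization vectors are excluded from the total.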

create_mask(src: Tensor, tgt: Tensor, pad_idx: int) Tuple[Tensor, ...]

Returns masks to hide source and target padding tokens.

Parameters:
  • src (torch.Tensor) – Source

  • tgt (torch.Tensor) – Target

  • pad_idx (int) – Index of padding symbol <pad> in special symbols list
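The masking logic can be illustrated with plain Python lists: a causal mask hides future target positions, and padding masks flag every position holding the `<pad>` token. The exact return layout of the real method is an assumption here:

```python
def create_mask(src, tgt, pad_idx):
    """Sketch of padding/causal mask construction on plain lists
    of token ids. Returns (tgt_mask, src_padding_mask,
    tgt_padding_mask); the real method's return layout may differ.
    """
    # Causal mask for the target: True above the diagonal blocks attention
    tgt_len = len(tgt)
    tgt_mask = [[j > i for j in range(tgt_len)] for i in range(tgt_len)]
    # Padding masks: True wherever the token is the <pad> symbol
    src_padding_mask = [tok == pad_idx for tok in src]
    tgt_padding_mask = [tok == pad_idx for tok in tgt]
    return tgt_mask, src_padding_mask, tgt_padding_mask

PAD = 1  # hypothetical index of <pad> in the special symbols list
tgt_mask, src_pad, tgt_pad = create_mask([5, 7, PAD], [4, PAD], PAD)
```

Here the last source token and last target token are padding, so they are masked out, and the first target position cannot attend to the second.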

decode(tgt: Tensor, memory: Tensor, tgt_mask: Tensor | None = None, memory_mask: Tensor | None = None, tgt_key_padding_mask: Tensor | None = None, memory_key_padding_mask: Tensor | None = None) Tuple[Tensor, Tuple[Tensor, ...]]

Returns the decoded transformer input.

Parameters:
  • tgt (torch.Tensor) – Decoder input sequence

  • memory (torch.Tensor) – Encoder output sequence

  • tgt_mask (torch.Tensor, optional) – Additive mask for the tgt sequence, defaults to None

  • memory_mask (torch.Tensor, optional) – Additive mask for the encoder output, defaults to None

  • tgt_key_padding_mask (torch.Tensor, optional) – Tensor mask for tgt keys per batch, defaults to None

  • memory_key_padding_mask (torch.Tensor, optional) – Tensor mask for memory keys per batch, defaults to None

encode(src: Tensor, src_mask: Tensor | None = None, src_key_padding_mask: Tensor | None = None, is_causal: bool = False) Tuple[Tensor, Tuple[Tensor, ...]]

Returns the encoded transformer input.

Parameters:
  • src (torch.Tensor) – Encoder input sequence

  • src_mask (torch.Tensor, optional) – Additive mask for the src sequence, defaults to None

  • src_key_padding_mask (torch.Tensor, optional) – Tensor mask for src keys per batch, defaults to None

  • is_causal (bool, optional) – Applies a causal mask as mask and ignores attn_mask for computing scaled dot product attention, defaults to False

forward(src: Tensor, tgt: Tensor, src_mask: Tensor | None = None, tgt_mask: Tensor | None = None, memory_mask: Tensor | None = None, src_key_padding_mask: Tensor | None = None, tgt_key_padding_mask: Tensor | None = None, memory_key_padding_mask: Tensor | None = None, is_causal: bool = False) Tensor

Returns the output of the neural network.

Parameters:
  • src (torch.Tensor) – Encoder input sequence

  • tgt (torch.Tensor) – Decoder input sequence

  • src_mask (Optional[torch.Tensor], optional) – Additive mask for the src sequence, defaults to None

  • tgt_mask (Optional[torch.Tensor], optional) – Additive mask for the tgt sequence, defaults to None

  • memory_mask (Optional[torch.Tensor], optional) – Additive mask for the encoder output, defaults to None

  • src_key_padding_mask (Optional[torch.Tensor], optional) – Tensor mask for src keys per batch, defaults to None

  • tgt_key_padding_mask (Optional[torch.Tensor], optional) – Tensor mask for tgt keys per batch, defaults to None

  • memory_key_padding_mask (Optional[torch.Tensor], optional) – Tensor mask for memory keys per batch, defaults to None

  • is_causal (bool, optional) – Applies a causal mask as mask and ignores attn_mask for computing scaled dot product attention, defaults to False

generate_square_subsequent_mask(sz)

Returns a square mask for the sequence that prevents the model from attending to future positions when making predictions. The masked positions are filled with float(‘-inf’). Unmasked positions are filled with float(0.0).

Parameters:

sz (int) – Size of mask to generate
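The mask described above can be sketched in pure Python: position i may attend to positions j ≤ i (value 0.0), while future positions j > i are blocked with -inf so they vanish under softmax:

```python
def generate_square_subsequent_mask(sz):
    """Square causal mask: 0.0 where attention is allowed (j <= i),
    -inf where position j is in the future of position i."""
    return [[0.0 if j <= i else float("-inf") for j in range(sz)]
            for i in range(sz)]

mask = generate_square_subsequent_mask(3)
```

For sz=3 the first row allows only position 0, while the last row allows all three positions.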

recreate_network() None

Recreates the neural network.

remove_decoder_layer()

Removes a decoder layer from the transformer.

remove_encoder_layer()

Removes an encoder layer from the transformer.

remove_node(network: str | None = None, hidden_layer: int | None = None, numb_new_nodes: int | None = None) Dict[str, Any]

Removes nodes from a hidden layer of the encoder/decoder.

Parameters:
  • network (Optional[str], optional) – Network to remove node from, ‘encoder’ or ‘decoder’, defaults to None

  • hidden_layer (Optional[int], optional) – Depth of hidden layer to remove nodes from, defaults to None

  • numb_new_nodes (Optional[int], optional) – Number of nodes to remove from hidden layer, defaults to None

Returns:

Dictionary containing the hidden layer, the number of removed nodes, and the network

Return type:

Dict[str, Any]

class agilerl.modules.bert.PositionalEncoder(emb_size: int, dropout: float, maxlen: int = 5000)

The Positional Encoder class. Adds positional encoding to the token embedding to introduce a notion of word order.

Parameters:
  • emb_size (int) – Number of expected features

  • dropout (float) – Dropout value

  • maxlen (int, optional) – Maximum length of sequence, defaults to 5000

forward(x: Tensor) Tensor

Forward pass through the positional encoder.

Parameters:
  • x (torch.Tensor) – Input to positional encoder, shape [seq_len, batch_size, embedding_dim]
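A minimal sketch of the sinusoidal positional-encoding table, assuming this encoder follows the standard scheme from the original Transformer (sine on even embedding indices, cosine on odd); the frequency constant 10000 is part of that assumption:

```python
import math

def positional_encoding(maxlen, emb_size):
    """Sinusoidal positional encoding table of shape
    [maxlen][emb_size]: sine on even indices, cosine on odd."""
    table = [[0.0] * emb_size for _ in range(maxlen)]
    for pos in range(maxlen):
        for i in range(0, emb_size, 2):
            # Each pair of dimensions shares one geometric frequency
            angle = pos / (10000 ** (i / emb_size))
            table[pos][i] = math.sin(angle)
            if i + 1 < emb_size:
                table[pos][i + 1] = math.cos(angle)
    return table

pe = positional_encoding(maxlen=50, emb_size=8)
```

Row 0 alternates 0.0 and 1.0 (sin 0 and cos 0); each later row shifts the phase of every sine/cosine pair, giving each position a distinct signature that is added to the token embedding.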

class agilerl.modules.bert.PositionalEncoding(max_positions: int, emb_size: int)

The positional embedding class. Converts tensor of input indices into corresponding tensor of position embeddings.

Parameters:
  • max_positions (int) – Maximum number of positions to embed

  • emb_size (int) – Size of the embedding

forward(tokens: Tensor)

Forward pass through the position embedding module.

Parameters:
  • tokens (torch.Tensor) – Tokens to embed

class agilerl.modules.bert.TokenEmbedding(vocab_size: int, emb_size: int)

The token embedding class. Converts tensor of input indices into corresponding tensor of token embeddings.

Parameters:
  • vocab_size (int) – Size of the vocabulary

  • emb_size (int) – Size of the embedding

forward(tokens: Tensor) Tensor

Forward pass through the token embedding module.

Parameters:
  • tokens (torch.Tensor) – Tokens to embed