crslab.model.utils.modules package

Submodules

class crslab.model.utils.modules.attention.SelfAttentionBatch(dim, da, alpha=0.2, dropout=0.5)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(h)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
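
Example (a minimal usage sketch; the tensor shapes in the comments are assumptions inferred from the constructor arguments, not part of the documented signature):

    import torch
    from crslab.model.utils.modules.attention import SelfAttentionBatch

    # Assumed usage: h is a batch of N vectors of size dim, and the module
    # returns a single attention-pooled vector of size dim.
    attn = SelfAttentionBatch(dim=128, da=64)
    h = torch.randn(10, 128)   # e.g. 10 entity embeddings of size 128
    pooled = attn(h)           # call the instance (not .forward) so registered hooks run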

class crslab.model.utils.modules.attention.SelfAttentionSeq(dim, da, alpha=0.2, dropout=0.5)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(h, mask=None, return_logits=False)[source]

For padding tokens, the corresponding mask entries are True; a mask of [1, 1, 1, …] therefore marks every position as padding.
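
Example (a minimal usage sketch; the shapes and return value in the comments are assumptions, apart from the mask convention stated above):

    import torch
    from crslab.model.utils.modules.attention import SelfAttentionSeq

    # Assumed shapes: h is (batch, seq_len, dim); mask is (batch, seq_len)
    # with True at padding positions, following the convention above.
    attn = SelfAttentionSeq(dim=128, da=64)
    h = torch.randn(2, 5, 128)
    mask = torch.tensor([[False, False, False, True, True],
                         [False, False, False, False, False]])
    pooled = attn(h, mask)     # assumed to return a (batch, dim) pooled representation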

class crslab.model.utils.modules.transformer.MultiHeadAttention(n_heads, dim, dropout=0.0)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(query, key=None, value=None, mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
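
Example (a minimal usage sketch; the shapes and mask convention in the comments are assumptions, not taken from the documentation above):

    import torch
    from crslab.model.utils.modules.transformer import MultiHeadAttention

    # dim must be divisible by n_heads. Assumed shapes: query is
    # (batch, query_len, dim), key/value are (batch, key_len, dim), and mask is
    # (batch, key_len) with 1 for positions that may be attended to.
    mha = MultiHeadAttention(n_heads=2, dim=128, dropout=0.1)
    query = torch.randn(2, 4, 128)
    self_mask = torch.ones(2, 4, dtype=torch.long)
    out = mha(query, mask=self_mask)               # key/value default to query (self-attention)

    memory = torch.randn(2, 6, 128)
    memory_mask = torch.ones(2, 6, dtype=torch.long)
    out = mha(query, memory, memory, memory_mask)  # attention over an external memory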

class crslab.model.utils.modules.transformer.TransformerDecoder(n_heads, n_layers, embedding_size, ffn_size, vocabulary_size, embedding=None, dropout=0.0, attention_dropout=0.0, relu_dropout=0.0, embeddings_scale=True, learn_positional_embeddings=False, padding_idx=None, n_positions=1024)[source]

Bases: torch.nn.modules.module.Module

Transformer Decoder layer.

Parameters
  • n_heads (int) – the number of multihead attention heads.

  • n_layers (int) – number of transformer layers.

  • embedding_size (int) – the embedding size. Must be a multiple of n_heads.

  • ffn_size (int) – the size of the hidden layer in the FFN.

  • embedding – an embedding matrix for the bottom layer of the transformer. If none, one is created for this decoder.

  • dropout (float) – Dropout used around embeddings and before layer normalizations. This is used in Vaswani 2017 and works well on large datasets.

  • attention_dropout (float) – Dropout performed after the multihead attention softmax. This is not used in Vaswani 2017.

  • relu_dropout (float) – Dropout used after the ReLU in the FFN. Not used in Vaswani 2017, but used in Tensor2Tensor.

  • padding_idx (int) – Reserved padding index in the embeddings matrix.

  • learn_positional_embeddings (bool) – If off, sinusoidal embeddings are used. If on, position embeddings are learned from scratch.

  • embeddings_scale (bool) – Scale embeddings relative to their dimensionality. Found useful in fairseq.

  • n_positions (int) – Size of the position embeddings matrix.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, encoder_state, incr_state=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
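
Example (a rough end-to-end sketch; it assumes input holds token indices, that encoder_state is the (output, mask) pair produced by a non-reducing TransformerEncoder, and that the return value is the decoded hidden states, none of which is guaranteed by the documentation above):

    import torch
    from crslab.model.utils.modules.transformer import TransformerDecoder, TransformerEncoder

    encoder = TransformerEncoder(n_heads=2, n_layers=2, embedding_size=128, ffn_size=256,
                                 vocabulary_size=1000, padding_idx=0, reduction=False)
    decoder = TransformerDecoder(n_heads=2, n_layers=2, embedding_size=128, ffn_size=256,
                                 vocabulary_size=1000, padding_idx=0)

    src_tokens = torch.randint(1, 1000, (2, 6))    # (batch, src_len) token ids (assumption)
    tgt_tokens = torch.randint(1, 1000, (2, 4))    # (batch, tgt_len) token ids (assumption)

    encoder_state = encoder(src_tokens)            # with reduction=False: assumed (output, mask) pair
    dec_out = decoder(tgt_tokens, encoder_state)   # assumed hidden states (possibly with incremental state)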

class crslab.model.utils.modules.transformer.TransformerDecoderLayer(n_heads, embedding_size, ffn_size, attention_dropout=0.0, relu_dropout=0.0, dropout=0.0)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, encoder_output, encoder_mask)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
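
Example (a minimal usage sketch; all shapes and the mask convention are assumptions):

    import torch
    from crslab.model.utils.modules.transformer import TransformerDecoderLayer

    # Assumed shapes: x is (batch, tgt_len, embedding_size), encoder_output is
    # (batch, src_len, embedding_size), and encoder_mask is (batch, src_len)
    # with 1 for real source tokens and 0 for padding.
    layer = TransformerDecoderLayer(n_heads=2, embedding_size=128, ffn_size=256)
    x = torch.randn(2, 4, 128)
    enc_out = torch.randn(2, 6, 128)
    enc_mask = torch.ones(2, 6, dtype=torch.long)
    y = layer(x, enc_out, enc_mask)    # assumed to keep the shape of x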

class crslab.model.utils.modules.transformer.TransformerEncoder(n_heads, n_layers, embedding_size, ffn_size, vocabulary_size, embedding=None, dropout=0.0, attention_dropout=0.0, relu_dropout=0.0, padding_idx=0, learn_positional_embeddings=False, embeddings_scale=False, reduction=True, n_positions=1024)[source]

Bases: torch.nn.modules.module.Module

Transformer encoder module.

Parameters
  • n_heads (int) – the number of multihead attention heads.

  • n_layers (int) – number of transformer layers.

  • embedding_size (int) – the embedding size. Must be a multiple of n_heads.

  • ffn_size (int) – the size of the hidden layer in the FFN.

  • embedding – an embedding matrix for the bottom layer of the transformer. If none, one is created for this encoder.

  • dropout (float) – Dropout used around embeddings and before layer normalizations. This is used in Vaswani 2017 and works well on large datasets.

  • attention_dropout (float) – Dropout performed after the multihead attention softmax. This is not used in Vaswani 2017.

  • relu_dropout (float) – Dropout used after the ReLU in the FFN. Not used in Vaswani 2017, but used in Tensor2Tensor.

  • padding_idx (int) – Reserved padding index in the embeddings matrix.

  • learn_positional_embeddings (bool) – If off, sinusoidal embeddings are used. If on, position embeddings are learned from scratch.

  • embeddings_scale (bool) – Scale embeddings relative to their dimensionality. Found useful in fairseq.

  • reduction (bool) – If true, returns the mean vector for the entire encoding sequence.

  • n_positions (int) – Size of the position embeddings matrix.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]

input data is a FloatTensor of shape [batch, seq_len, dim]; mask is a ByteTensor of shape [batch, seq_len], filled with 1 inside the sequence and 0 outside.
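
Example (a usage sketch; it assumes the encoder embeds token indices itself, consistent with the vocabulary_size and padding_idx parameters; if your build instead expects pre-embedded inputs as described above, pass those. The return values in the comments are assumptions):

    import torch
    from crslab.model.utils.modules.transformer import TransformerEncoder

    encoder = TransformerEncoder(n_heads=2, n_layers=2, embedding_size=128, ffn_size=256,
                                 vocabulary_size=1000, padding_idx=0, reduction=True)
    tokens = torch.tensor([[5, 8, 13, 0, 0],
                           [7, 2, 9, 4, 1]])   # (batch, seq_len); 0 is padding here (assumption)
    pooled = encoder(tokens)                   # reduction=True: assumed (batch, embedding_size) mean vector

    seq_encoder = TransformerEncoder(n_heads=2, n_layers=2, embedding_size=128, ffn_size=256,
                                     vocabulary_size=1000, padding_idx=0, reduction=False)
    states = seq_encoder(tokens)               # reduction=False: assumed per-token states plus the mask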

class crslab.model.utils.modules.transformer.TransformerEncoderLayer(n_heads, embedding_size, ffn_size, attention_dropout=0.0, relu_dropout=0.0, dropout=0.0)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(tensor, mask)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
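
Example (a minimal usage sketch; the shapes and mask convention are assumptions):

    import torch
    from crslab.model.utils.modules.transformer import TransformerEncoderLayer

    # Assumed shapes: tensor is (batch, seq_len, embedding_size); mask is
    # (batch, seq_len) with 1 for real tokens and 0 for padding.
    layer = TransformerEncoderLayer(n_heads=2, embedding_size=128, ffn_size=256)
    tensor = torch.randn(2, 5, 128)
    mask = torch.tensor([[1, 1, 1, 0, 0],
                         [1, 1, 1, 1, 1]])
    out = layer(tensor, mask)          # assumed to keep the input shape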

class crslab.model.utils.modules.transformer.TransformerFFN(dim, dim_hidden, relu_dropout=0.0)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
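
Example (a minimal usage sketch; based on the constructor arguments, the block is assumed to map the last dimension dim -> dim_hidden -> dim):

    import torch
    from crslab.model.utils.modules.transformer import TransformerFFN

    ffn = TransformerFFN(dim=128, dim_hidden=256, relu_dropout=0.1)
    x = torch.randn(2, 5, 128)         # last dimension must equal dim
    y = ffn(x)                         # assumed to have the same shape as x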

crslab.model.utils.modules.transformer._normalize(tensor, norm_layer)[source]

Broadcast layer norm.

crslab.model.utils.modules.transformer.create_position_codes(n_pos, dim, out)[source]

crslab.model.utils.modules.transformer.neginf(dtype)[source]

Returns a representable finite number near -inf for a dtype.
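
Example (an illustrative sketch of the usual masking pattern such a value supports; the shapes and padding convention here are assumptions):

    import torch
    from crslab.model.utils.modules.transformer import neginf

    scores = torch.randn(2, 4, 6)                      # e.g. (batch, query_len, key_len) attention logits
    key_padding = torch.tensor([[0, 0, 0, 0, 1, 1],
                                [0, 0, 0, 0, 0, 0]])   # 1 marks padding keys (assumption)
    scores = scores.masked_fill(key_padding.bool().unsqueeze(1), neginf(scores.dtype))
    weights = torch.softmax(scores, dim=-1)            # padded keys get ~0 attention weight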

Module contents