crslab.model.crs.kgsf package

Submodules

KGSF

class crslab.model.crs.kgsf.kgsf.KGSFModel(opt, device, vocab, side_data)[source]

Bases: crslab.model.base.BaseModel

vocab_size

An integer indicating the vocabulary size.

pad_token_idx

An integer indicating the id of the padding token.

start_token_idx

An integer indicating the id of the start token.

end_token_idx

An integer indicating the id of the end token.

token_emb_dim

An integer indicating the dimension of the token embedding layer.

pretrain_embedding

A string indicating the path of the pretrained embedding.

n_word

An integer indicating the number of words.

n_entity

An integer indicating the number of entities.

pad_word_idx

An integer indicating the id of word padding.

pad_entity_idx

An integer indicating the id of entity padding.

num_bases

An integer indicating the number of bases.

kg_emb_dim

An integer indicating the dimension of the KG embedding.

n_heads

An integer indicating the number of heads.

n_layers

An integer indicating the number of layers.

ffn_size

An integer indicating the size of the FFN hidden layer.

dropout

A float indicating the dropout rate.

attention_dropout

A float indicating the dropout rate of the attention layer.

relu_dropout

A float indicating the dropout rate of the ReLU layer.

learn_positional_embeddings

A boolean indicating whether to learn the positional embeddings.

embeddings_scale

A boolean indicating whether to scale the embeddings.

reduction

A boolean indicating whether to use reduction.

n_positions

An integer indicating the number of positions.

response_truncate

An integer indicating the maximum length for response generation.

pretrained_embedding

A string indicating the path of the pretrained embedding.

Parameters
  • opt (dict) – A dictionary recording the hyperparameters.

  • device (torch.device) – The device on which to place the data and model.

  • vocab (dict) – A dictionary recording the vocabulary information.

  • side_data (dict) – A dictionary recording the side data.
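
A minimal usage sketch, based on CRSLab's quick-start interface: the YAML config supplies opt, the dataset loader builds vocab and side_data, and KGSFModel is then constructed inside the CRS system. The config path follows the project's shipped examples, and flag names may differ between versions.

    from crslab.config import Config
    from crslab.quick_start import run_crslab

    # 'config/crs/kgsf/redial.yaml' is the KGSF-on-ReDial example config
    # shipped with CRSLab; substitute your own config file as needed.
    config = Config('config/crs/kgsf/redial.yaml')
    run_crslab(config, save_data=False, restore_data=False)

Constructing the model directly with the four arguments above is also possible, provided the dictionaries contain the keys expected by CRSLab's data loaders.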

_starts(batch_size)[source]

Return start tokens for a batch of size batch_size.
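
A minimal sketch of what this helper typically computes, assuming it simply tiles start_token_idx into a (batch_size, 1) LongTensor used to seed decoding:

    import torch

    def _starts(batch_size: int, start_token_idx: int) -> torch.Tensor:
        # One start token per example in the batch, shape (batch_size, 1).
        return torch.full((batch_size, 1), start_token_idx, dtype=torch.long)

    print(_starts(4, 1).shape)  # torch.Size([4, 1])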

build_model()[source]

Build the model.

converse(batch, mode)[source]

Calculate the loss and prediction of conversation for the batch under a certain mode.

Parameters
  • batch (dict or tuple) – batch data

  • mode (str, optional) – train/valid/test.

forward(batch, stage, mode)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

freeze_parameters()[source]
pretrain_infomax(batch)[source]

words: (batch_size, word_length)
entity_labels: (batch_size, n_entity)
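
An illustrative sketch of a batch with the documented shapes; the real batch is assembled by CRSLab's dataloader, and entity_labels is assumed here to be a multi-hot vector over all entities:

    import torch

    batch_size, word_length, n_entity = 2, 10, 64
    words = torch.randint(0, 100, (batch_size, word_length))  # word ids per dialog
    entity_labels = torch.zeros(batch_size, n_entity)          # multi-hot entity targets
    entity_labels[0, [3, 7]] = 1.0                             # mark mentioned entities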

recommend(batch, mode)[source]

context_entities: (batch_size, entity_length)
context_words: (batch_size, word_length)
movie: (batch_size)
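
Again only an illustrative sketch of the documented shapes; the id ranges and padding conventions follow CRSLab's dataloader, not this example:

    import torch

    batch_size, entity_length, word_length = 2, 8, 10
    context_entities = torch.randint(0, 500, (batch_size, entity_length))  # entity ids
    context_words = torch.randint(0, 1000, (batch_size, word_length))      # word ids
    movie = torch.randint(0, 500, (batch_size,))                           # target item ids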

class crslab.model.crs.kgsf.modules.GateLayer(input_dim)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input1, input2)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
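
The class itself is undocumented here; the sketch below shows a typical two-input gating fusion of the kind KGSF uses to combine word-graph and entity-graph representations. It is an assumption about the layer's behaviour (class name GateFusion is hypothetical), not a copy of the actual implementation.

    import torch
    from torch import nn

    class GateFusion(nn.Module):
        """Hypothetical stand-in for GateLayer: gated sum of two inputs."""

        def __init__(self, input_dim: int):
            super().__init__()
            self.gate = nn.Linear(2 * input_dim, input_dim)

        def forward(self, input1: torch.Tensor, input2: torch.Tensor) -> torch.Tensor:
            # Gate computed from the concatenated inputs, applied as a convex mix.
            g = torch.sigmoid(self.gate(torch.cat([input1, input2], dim=-1)))
            return g * input1 + (1 - g) * input2

    fused = GateFusion(128)(torch.randn(4, 128), torch.randn(4, 128))  # shape (4, 128)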

class crslab.model.crs.kgsf.modules.TransformerDecoderKG(n_heads, n_layers, embedding_size, ffn_size, vocabulary_size, embedding, dropout=0.0, attention_dropout=0.0, relu_dropout=0.0, embeddings_scale=True, learn_positional_embeddings=False, padding_idx=None, n_positions=1024)[source]

Bases: torch.nn.modules.module.Module

Transformer Decoder layer.

Parameters
  • n_heads (int) – the number of multihead attention heads.

  • n_layers (int) – number of transformer layers.

  • embedding_size (int) – the embedding size. Must be a multiple of n_heads.

  • ffn_size (int) – the size of the hidden layer in the FFN.

  • embedding – an embedding matrix for the bottom layer of the transformer. If none, one is created for this decoder.

  • dropout (float) – Dropout used around embeddings and before layer normalizations. This is used in Vaswani 2017 and works well on large datasets.

  • attention_dropout (float) – Dropout performed after the multihead attention softmax. This is not used in Vaswani 2017.

  • relu_dropout (float) – Dropout used after the ReLU in the FFN. Not used in Vaswani 2017, but used in Tensor2Tensor.

  • padding_idx (int) – Reserved padding index in the embeddings matrix.

  • learn_positional_embeddings (bool) – If off, sinusoidal embeddings are used. If on, position embeddings are learned from scratch.

  • embeddings_scale (bool) – Scale embeddings relative to their dimensionality. Found useful in fairseq.

  • n_positions (int) – Size of the position embeddings matrix.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
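
A construction sketch using only the documented signature; the sizes are placeholders, and the embedding matrix is built here just so the example is self-contained:

    import torch
    from torch import nn
    from crslab.model.crs.kgsf.modules import TransformerDecoderKG

    vocab_size, emb_dim, pad_idx = 1000, 128, 0
    embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=pad_idx)
    decoder = TransformerDecoderKG(
        n_heads=2,
        n_layers=2,
        embedding_size=emb_dim,   # must be a multiple of n_heads
        ffn_size=256,
        vocabulary_size=vocab_size,
        embedding=embedding,
        dropout=0.1,
        padding_idx=pad_idx,
    )

The forward pass additionally consumes the dialog encoder state plus the outputs and masks of the word-graph and entity-graph encoders, as listed in the forward signature below.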

forward(input, encoder_state, kg_encoder_output, kg_encoder_mask, db_encoder_output, db_encoder_mask, incr_state=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class crslab.model.crs.kgsf.modules.TransformerDecoderLayerKG(n_heads, embedding_size, ffn_size, attention_dropout=0.0, relu_dropout=0.0, dropout=0.0)[source]

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, encoder_output, encoder_mask, kg_encoder_output, kg_encoder_mask, db_encoder_output, db_encoder_mask)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Module contents