models module

class models.CharRNNDecoder(embedding, emb_dim, hidden_dim, tone_vowel_dim, tone_vowel_input, vocab_dim, n_layers=1)

Bases: torch.nn.modules.module.Module

Character-level recurrent decoder cell.

Args:
  • embedding: character embedding lookup variable (requires_grad).
  • emb_dim: the expected dimension of the character embedding (plus the tone and vowel dimensions if tone and vowel information is used).
  • hidden_dim: the expected dimension of the hidden state that summarizes a sequence of chars.
  • vocab_dim: the vocabulary size of the corpus.
  • n_layers: the number of GRU layers. Default: 1.
inputs: input, hidden, tone_embedded (optional), vowel_embedded (optional)
  • input: 1D array of length (batch * sentence_length) containing the char indices at char step c_t.
  • hidden (n_layers, batch * sentence_length, hidden_dim): tensor containing the hidden state at char step c_t.
  • tone_embedded (1, batch * sentence_length, tone_emb_dim): tensor containing the embedded tone information at char step c_t.
  • vowel_embedded (1, batch * sentence_length, tone_emb_dim): tensor containing the embedded vowel information at char step c_t.

outputs: output, h'
  • h' (n_layers, batch * sentence_length, hidden_dim): tensor containing the hidden state at the next char step c_t+1.
notes:
  • input is transformed into a char embedding of shape (1, batch * sentence_length, char_emb_dim) by looking up its indices in the forward function.

argmax_logits(input)
forward(input, hidden, tone_embedded=None, vowel_embedded=None)
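
A minimal usage sketch for this decoder cell, assuming the constructor and forward signatures documented above, that embedding is a trainable lookup tensor of shape (vocab_dim, emb_dim), that tone_vowel_input is a boolean flag, and that argmax_logits maps the decoder output logits to char indices; all dimension values below are hypothetical::

    import torch
    from models import CharRNNDecoder

    # Hypothetical dimensions for illustration only.
    vocab_dim, emb_dim, hidden_dim, tone_vowel_dim = 100, 64, 128, 16
    batch, sentence_length, n_layers = 8, 20, 1

    # Assumed: a trainable character embedding lookup table (vocab_dim x emb_dim).
    embedding = torch.randn(vocab_dim, emb_dim, requires_grad=True)

    decoder = CharRNNDecoder(embedding, emb_dim, hidden_dim, tone_vowel_dim,
                             tone_vowel_input=False, vocab_dim=vocab_dim, n_layers=n_layers)

    # One char step c_t for every sentence position in the batch.
    char_step = torch.randint(0, vocab_dim, (batch * sentence_length,))
    hidden = torch.zeros(n_layers, batch * sentence_length, hidden_dim)

    output, hidden = decoder(char_step, hidden)   # tone/vowel embeddings omitted here
    next_chars = decoder.argmax_logits(output)    # assumed: greedy pick over the vocab logits
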
class models.CharRNNEncoder(embedding, char_emb_dim, hidden_dim, n_layers=1)

Bases: torch.nn.modules.module.Module

Character-level recurrent encoder cell.

Args:
  • embedding: character embedding lookup variable (requires_grad).
  • char_emb_dim: the expected dimension of the character embedding.
  • hidden_dim: the expected dimension of the hidden state that summarizes a sequence of chars.
  • n_layers: the number of GRU layers. Default: 1.
inputs: input, hidden
  • input: 1D array of length (batch * sentence_length) containing the char indices at char step c_t.
  • hidden (n_layers, batch * sentence_length, hidden_dim): tensor containing the hidden state at char step c_t.

outputs: output, h'
  • h' (n_layers, batch * sentence_length, hidden_dim): tensor containing the hidden state at the next char step c_t+1.

notes:
  • input is transformed into a char embedding of shape (1, batch * sentence_length, char_emb_dim) by looking up its indices in the forward function.

forward(input, hidden)
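
A similar usage sketch for the encoder cell, again assuming embedding is a trainable lookup tensor and stepping through a hypothetical char sequence one step at a time::

    import torch
    from models import CharRNNEncoder

    vocab_dim, char_emb_dim, hidden_dim = 100, 64, 128
    batch, sentence_length, n_layers = 8, 20, 1
    max_chars = 12  # hypothetical max char sequence length

    embedding = torch.randn(vocab_dim, char_emb_dim, requires_grad=True)
    encoder = CharRNNEncoder(embedding, char_emb_dim, hidden_dim, n_layers=n_layers)

    char_indices = torch.randint(0, vocab_dim, (batch * sentence_length, max_chars))
    hidden = torch.zeros(n_layers, batch * sentence_length, hidden_dim)

    # Feed the char sequence one step c_t at a time, carrying the hidden state forward;
    # the final hidden state summarizes each char sequence.
    for t in range(max_chars):
        output, hidden = encoder(char_indices[:, t], hidden)
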
class models.FirstsentenceRNNEncoder(hidden_size)

Bases: torch.nn.modules.module.Module

First sentence-level recurrent encoder cell.

Args:
  • hidden_size: the expected dimension of the hidden state that summarizes a sequence of sentences.
inputs: sentence_input, hidden
  • sentence_input (1, batch, sentence_emb_dim): tensor containing a batch of sentence summaries at sentence step s_t'.
  • hidden (1, batch, hidden_size): tensor containing the hidden state at sentence step s_t'.

outputs: output, h'
  • h' (1, batch, hidden_size): tensor containing the hidden state at the next time step s_t'+1.
notes:
  • s_t' denotes the backward time step: the inputs are the sentence sequence in reverse order relative to the sentence-level recurrent encoder cell.

forward(sentence_input, hidden)
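
A usage sketch for this backward (first-sentence) pass, assuming the sentence summaries fed to the cell have dimension hidden_size and that the cell consumes one sentence step at a time; values are hypothetical::

    import torch
    from models import FirstsentenceRNNEncoder

    batch, n_sentences, hidden_size = 8, 5, 128

    # Assumed: one (1, batch, hidden_size) summary tensor per sentence.
    sentence_summaries = [torch.randn(1, batch, hidden_size) for _ in range(n_sentences)]

    backward_encoder = FirstsentenceRNNEncoder(hidden_size)
    hidden = torch.zeros(1, batch, hidden_size)

    # The backward pass consumes the sentence sequence in reverse order (steps s_t').
    for sentence_input in reversed(sentence_summaries):
        output, hidden = backward_encoder(sentence_input, hidden)
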
class models.SentenceRNNEncoder(sentence_emb_dim, hidden_dim)

Bases: torch.nn.modules.module.Module

Sentence-level recurrent encoder cell.

Args:
  • sentence_emb_dim: the expected dimension of the sentence encoding.
  • hidden_dim: the expected dimension of the hidden state that summarizes a sequence of sentences.
inputs: input, hidden
  • input (1, batch, sentence_emb_dim): tensor containing a batch of sentence summaries at sentence step s_t.
  • hidden (1, batch, hidden_dim): tensor containing the hidden state at sentence step s_t.

outputs: output, h'
  • h' (1, batch, hidden_dim): tensor containing the hidden state at the next sentence step s_t+1.
forward(sentence_inputs, hidden)
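
The forward counterpart of the backward sketch above, assuming forward consumes one (1, batch, sentence_emb_dim) sentence step at a time, as the documented input shape suggests; values are hypothetical::

    import torch
    from models import SentenceRNNEncoder

    batch, n_sentences, sentence_emb_dim, hidden_dim = 8, 5, 128, 128

    sentence_summaries = [torch.randn(1, batch, sentence_emb_dim) for _ in range(n_sentences)]

    forward_encoder = SentenceRNNEncoder(sentence_emb_dim, hidden_dim)
    hidden = torch.zeros(1, batch, hidden_dim)

    # The forward pass consumes the sentence sequence in order (steps s_t).
    for sentence_input in sentence_summaries:
        output, hidden = forward_encoder(sentence_input, hidden)
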
class models.VariationalInference(sentence_emb_dim, latent_dim)

Bases: torch.nn.modules.module.Module

Variational Inference cell.

Args:
  • sentence_emb_dim: the expected dimension of the sentence encoding.
  • latent_dim: the expected dimension of the latent space for a sequence of sentences.
inputs: input
  • input (batch, sentence_emb_dim): tensor containing a batch of sentence summaries at sentence step s_t.

outputs: decoder_input, z, mu, logvar
  • decoder_input (batch, hidden_dim): tensor containing the decoder input (hidden state) at the current sentence step s_t.
  • z (batch, latent_dim): tensor containing the latent state at the current sentence step s_t.
  • mu (batch, latent_dim): tensor containing the means at the current sentence step s_t.
  • logvar (batch, latent_dim): tensor containing the log variances at the current sentence step s_t.

decode(z)
draw_z(batch_size)

Generates an initial z for a batch of size batch_size from the standard Gaussian distribution N(0, I).

encode(x)
forward(x)
reparameterize(mu, logvar)
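
reparameterize presumably implements the standard reparameterization trick for a diagonal Gaussian; a minimal standalone sketch of that trick (not necessarily the exact implementation in models.VariationalInference)::

    import torch

    def reparameterize(mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I); sampling stays differentiable w.r.t. mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    # draw_z(batch_size) analogue: an initial z drawn from the standard Gaussian N(0, I).
    latent_dim = 16  # hypothetical value for illustration
    z0 = torch.randn(4, latent_dim)
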