Transformer XL
Model from Keita Kurita's Transformer-XL from-scratch notebook: https://github.com/keitakurita/Practical_NLP_in_PyTorch/blob/master/deep_dives/transformer_xl_from_scratch.ipynb. Note: this model is not currently usable.
class flood_forecast.transformer_xl.transformer_xl.MultiHeadAttention(d_input: int, d_inner: int, n_heads: int = 4, dropout: float = 0.1, dropouta: float = 0.0)

    __init__(d_input: int, d_inner: int, n_heads: int = 4, dropout: float = 0.1, dropouta: float = 0.0)
        Initializes internal Module state, shared by both nn.Module and ScriptModule.

    forward(input_: torch.FloatTensor, pos_embs: torch.FloatTensor, memory: torch.FloatTensor, u: torch.FloatTensor, v: torch.FloatTensor, mask: Optional[torch.FloatTensor] = None)
        The positional embeddings (pos_embs) are passed in separately because the layer needs to handle relative positions.
        input shape: (seq, bs, self.d_input)
        pos_embs shape: (seq + prev_seq, bs, self.d_input)
        output shape: (seq, bs, self.d_input)

    training: bool
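A minimal usage sketch based on the shapes documented for forward() above. The (prev_seq, bs, d_input) memory layout and the (n_heads, d_inner) shapes for the content bias u and position bias v are assumptions, not documented here:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import MultiHeadAttention

    seq, prev_seq, bs = 10, 6, 2
    d_input, d_inner, n_heads = 32, 8, 4  # d_inner assumed to be the per-head dimension

    mha = MultiHeadAttention(d_input, d_inner, n_heads=n_heads)

    input_ = torch.rand(seq, bs, d_input)               # documented input shape
    pos_embs = torch.rand(seq + prev_seq, bs, d_input)  # documented pos_embs shape
    memory = torch.rand(prev_seq, bs, d_input)          # assumed memory layout
    u = torch.rand(n_heads, d_inner)                    # assumed bias shape
    v = torch.rand(n_heads, d_inner)                    # assumed bias shape

    out = mha(input_, pos_embs, memory, u, v)
    print(out.shape)  # documented output shape: (seq, bs, d_input)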
class flood_forecast.transformer_xl.transformer_xl.PositionwiseFF(d_input, d_inner, dropout)

    __init__(d_input, d_inner, dropout)
        Initializes internal Module state, shared by both nn.Module and ScriptModule.

    forward(input_: torch.FloatTensor) → torch.FloatTensor
        Applies the position-wise feed-forward network to input_.

    training: bool
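Since the network is applied position-wise, it preserves the input shape. A small sketch, assuming the same (seq, bs, d_input) layout used by MultiHeadAttention:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import PositionwiseFF

    ff = PositionwiseFF(d_input=32, d_inner=64, dropout=0.1)
    x = torch.rand(10, 2, 32)  # (seq, bs, d_input); layout assumed
    print(ff(x).shape)         # position-wise, so expected to match the input: (10, 2, 32)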
class flood_forecast.transformer_xl.transformer_xl.DecoderBlock(n_heads, d_input, d_head_inner, d_ff_inner, dropout, dropouta=0.0)

    __init__(n_heads, d_input, d_head_inner, d_ff_inner, dropout, dropouta=0.0)
        Initializes internal Module state, shared by both nn.Module and ScriptModule.

    forward(input_: torch.FloatTensor, pos_embs: torch.FloatTensor, u: torch.FloatTensor, v: torch.FloatTensor, mask=None, mems=None)
        Runs one decoder block: multi-head relative attention followed by the position-wise feed-forward layer.

    training: bool
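A sketch of a single block, assuming d_head_inner plays the role of MultiHeadAttention's d_inner and that mems uses the same (prev_seq, bs, d_input) layout assumed above:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import DecoderBlock

    n_heads, d_input, d_head_inner, d_ff_inner = 4, 32, 8, 64
    block = DecoderBlock(n_heads, d_input, d_head_inner, d_ff_inner, dropout=0.1)

    seq, prev_seq, bs = 10, 6, 2
    x = torch.rand(seq, bs, d_input)
    pos_embs = torch.rand(seq + prev_seq, bs, d_input)
    mems = torch.rand(prev_seq, bs, d_input)   # assumed memory layout
    u = torch.rand(n_heads, d_head_inner)      # assumed bias shape
    v = torch.rand(n_heads, d_head_inner)      # assumed bias shape

    out = block(x, pos_embs, u, v, mems=mems)
    print(out.shape)  # expected: (seq, bs, d_input)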
class flood_forecast.transformer_xl.transformer_xl.PositionalEmbedding(d)

    forward(positions: torch.LongTensor)
        Computes the positional embeddings for the given positions.

    training: bool
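A sketch of generating embeddings for a segment plus its memory, assuming the module maps a 1-D tensor of positions to one d-dimensional embedding per position:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import PositionalEmbedding

    pos_emb = PositionalEmbedding(32)  # d = 32
    positions = torch.arange(16)       # seq + prev_seq positions
    embs = pos_emb(positions)
    print(embs.shape)                  # one d-dimensional embedding per position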
class flood_forecast.transformer_xl.transformer_xl.StandardWordEmbedding(num_embeddings, embedding_dim, div_val=1, sample_softmax=False)

    __init__(num_embeddings, embedding_dim, div_val=1, sample_softmax=False)
        Initializes internal Module state, shared by both nn.Module and ScriptModule.

    forward(input_: torch.LongTensor)
        Looks up embeddings for the given token indices.

    training: bool
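A lookup sketch with the default div_val and sample_softmax, assuming a (seq, bs) index layout consistent with the shapes above:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import StandardWordEmbedding

    emb = StandardWordEmbedding(num_embeddings=1000, embedding_dim=32)
    idxs = torch.randint(0, 1000, (10, 2))  # (seq, bs) token indices; layout assumed
    print(emb(idxs).shape)                  # expected: (10, 2, 32)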
class flood_forecast.transformer_xl.transformer_xl.TransformerXL(num_embeddings, n_layers, n_heads, d_model, d_head_inner, d_ff_inner, dropout=0.1, dropouta=0.0, seq_len: int = 0, mem_len: int = 0)

    __init__(num_embeddings, n_layers, n_heads, d_model, d_head_inner, d_ff_inner, dropout=0.1, dropouta=0.0, seq_len: int = 0, mem_len: int = 0)
        Initializes internal Module state, shared by both nn.Module and ScriptModule.

    update_memory(previous_memory: List[torch.FloatTensor], hidden_states: List[torch.FloatTensor])
        Updates the per-layer memory from the previous memory and the latest hidden states.

    forward(idxs: torch.LongTensor, target: torch.LongTensor, memory: Optional[List[torch.FloatTensor]] = None) → Dict[str, torch.Tensor]
        Runs the full model over the token indices idxs, attending to memory from previous segments when provided, and returns a dictionary of output tensors.

    training: bool
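An end-to-end sketch. The hyperparameter values are arbitrary, the (seq, bs) index layout is assumed, and the keys of the returned dictionary are not documented here, so the example just inspects them:

    import torch
    from flood_forecast.transformer_xl.transformer_xl import TransformerXL

    model = TransformerXL(
        num_embeddings=1000, n_layers=2, n_heads=4,
        d_model=32, d_head_inner=8, d_ff_inner=64,
        seq_len=10, mem_len=6,
    )

    idxs = torch.randint(0, 1000, (10, 2))    # (seq, bs) token indices; layout assumed
    target = torch.randint(0, 1000, (10, 2))  # targets, same shape as idxs
    out = model(idxs, target)                 # memory defaults to None on the first call
    print(out.keys())                         # returned Dict[str, torch.Tensor]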