Module¶
deepke.attribution_extraction.standard.module.Attention module¶
- class deepke.attribution_extraction.standard.module.Attention.DotAttention(dropout=0.0)[source]¶
Bases: torch.nn.modules.module.Module
- class deepke.attribution_extraction.standard.module.Attention.MultiHeadAttention(embed_dim, num_heads, dropout=0.0, output_attentions=True)[source]¶
Bases: torch.nn.modules.module.Module
- forward(Q, K, V, key_padding_mask=None, attention_mask=None, head_mask=None)[source]¶
- Parameters
Q – [B, L, Hs]
K – [B, S, Hs]
V – [B, S, Hs]
key_padding_mask – [B, S]; positions that are 1/True are masked out
attention_mask – [S] / [L, S]; masks the specified positions, positions that are 1/True are masked out
head_mask – [N]; masks the specified heads, heads that are 1/True are masked out
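A minimal usage sketch following the shapes above (the tensors and mask are made up for illustration; the exact return value, e.g. whether attention weights are returned alongside the output when output_attentions=True, depends on the implementation):

    import torch
    from deepke.attribution_extraction.standard.module.Attention import MultiHeadAttention

    B, L, S, Hs, N = 2, 5, 7, 64, 8           # batch, query length, key length, hidden size, heads
    mha = MultiHeadAttention(embed_dim=Hs, num_heads=N, dropout=0.1)

    Q = torch.randn(B, L, Hs)                 # queries [B, L, Hs]
    K = torch.randn(B, S, Hs)                 # keys    [B, S, Hs]
    V = torch.randn(B, S, Hs)                 # values  [B, S, Hs]

    # key_padding_mask: positions that are 1/True are masked out
    key_padding_mask = torch.zeros(B, S, dtype=torch.bool)
    key_padding_mask[:, -2:] = True           # treat the last two key positions as padding

    out = mha(Q, K, V, key_padding_mask=key_padding_mask)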
deepke.attribution_extraction.standard.module.CNN module¶
- class deepke.attribution_extraction.standard.module.CNN.GELU[source]¶
Bases: torch.nn.modules.module.Module
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.attribution_extraction.standard.module.CNN.CNN(config)[source]¶
Bases: torch.nn.modules.module.Module
In NLP, odd kernel_size values such as [3, 5, 7, 9] are generally used so that the output sequence length equals the input sequence length; in that case padding = k // 2 and stride is usually 1. Outputs of a different length are also possible by setting keep_length to False.
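The length-preserving convention can be checked with a plain nn.Conv1d using the same settings (a generic sketch, not the CNN class itself, whose config fields are not listed here):

    import torch
    import torch.nn as nn

    x = torch.randn(2, 100, 50)                         # [B, L, H_in]
    for k in (3, 5, 7, 9):                              # odd kernel sizes
        conv = nn.Conv1d(in_channels=50, out_channels=60,
                         kernel_size=k, stride=1, padding=k // 2)
        y = conv(x.transpose(1, 2)).transpose(1, 2)     # Conv1d expects [B, C, L]
        assert y.shape[1] == x.shape[1]                 # output length == input length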
deepke.attribution_extraction.standard.module.Capsule module¶
deepke.attribution_extraction.standard.module.Embedding module¶
- class deepke.attribution_extraction.standard.module.Embedding.Embedding(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(*x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
deepke.attribution_extraction.standard.module.GCN module¶
- class deepke.attribution_extraction.standard.module.GCN.GCN(cfg)[source]¶
Bases: torch.nn.modules.module.Module
- forward(x, adj)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
deepke.attribution_extraction.standard.module.RNN module¶
- class deepke.attribution_extraction.standard.module.RNN.RNN(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(x, x_len)[source]¶
- Parameters
x (torch.Tensor) – [batch_size, seq_max_length, input_size], i.e. [B, L, H_in]; usually the output of the embedding layer
x_len – torch.Tensor [L], sequence lengths, already sorted
- Returns
output: torch.Tensor [B, L, H_out], per-token results used for sequence labeling; hn: torch.Tensor [B, N, H_out] / [B, H_out], results used for classification; when last_layer_hn is set, only the last layer's result is returned
- Return type
output
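The shape contract above can be illustrated with a plain nn.LSTM and pack_padded_sequence (a generic sketch of the same convention, not the RNN class itself, whose config fields are not listed here):

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

    B, L, H_in, H_out = 3, 10, 128, 64
    x = torch.randn(B, L, H_in)               # embedded inputs [B, L, H_in]
    x_len = torch.tensor([10, 7, 4])          # sequence lengths, sorted in descending order

    lstm = nn.LSTM(H_in, H_out, batch_first=True)
    packed = pack_padded_sequence(x, x_len, batch_first=True)
    packed_out, (hn, _) = lstm(packed)
    output, _ = pad_packed_sequence(packed_out, batch_first=True, total_length=L)

    print(output.shape)                       # [B, L, H_out]: per-token states for sequence labeling
    print(hn.transpose(0, 1).shape)           # [B, N, H_out]: final states for classification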
deepke.attribution_extraction.standard.module.Transformer module¶
- deepke.attribution_extraction.standard.module.Transformer.gelu(x)[source]¶
Original implementation of the gelu activation function in the Google BERT repo when initially created. For information: OpenAI GPT’s gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))). Also see https://arxiv.org/abs/1606.08415
- deepke.attribution_extraction.standard.module.Transformer.gelu_new(x)[source]¶
Implementation of the gelu activation function currently in the Google BERT repo (identical to OpenAI GPT). Also see https://arxiv.org/abs/1606.08415
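For reference, a small sketch of the two variants described above (the local names gelu_erf and gelu_tanh are just for illustration; the tanh form is the approximation quoted in the gelu docstring):

    import math
    import torch

    def gelu_erf(x):
        # exact GELU: x * Phi(x), with Phi the standard normal CDF
        return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

    def gelu_tanh(x):
        # tanh approximation (OpenAI GPT style), as quoted above
        return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))

    x = torch.linspace(-3, 3, 7)
    print(torch.max(torch.abs(gelu_erf(x) - gelu_tanh(x))))   # the two differ only slightly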
- class deepke.attribution_extraction.standard.module.Transformer.TransformerAttention(config)[source]¶
Bases: torch.nn.modules.module.Module
- class deepke.attribution_extraction.standard.module.Transformer.TransformerOutput(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(input_tensor)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.attribution_extraction.standard.module.Transformer.TransformerLayer(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, key_padding_mask=None, attention_mask=None, head_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.