Models¶
- CLIP
- deepke.name_entity_re.multimodal.models.clip.configuration_clip module
- deepke.name_entity_re.multimodal.models.clip.feature_extraction_clip module
- deepke.name_entity_re.multimodal.models.clip.feature_extraction_utils module
- deepke.name_entity_re.multimodal.models.clip.file_utils module
- deepke.name_entity_re.multimodal.models.clip.image_utils module
- deepke.name_entity_re.multimodal.models.clip.modeling_clip module
- deepke.name_entity_re.multimodal.models.clip.processing_clip module
- deepke.name_entity_re.multimodal.models.clip.tokenization_clip module
deepke.name_entity_re.multimodal.models.IFA_model module¶
- class deepke.name_entity_re.multimodal.models.IFA_model.IFANERCRFModel(label_list, args)[source]¶
Bases: torch.nn.modules.module.Module
- forward(input_ids=None, attention_mask=None, token_type_ids=None, labels=None, images=None, aux_imgs=None, rcnn_imgs=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
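A minimal usage sketch (not part of the generated API): the label set, batch sizes, and image tensor shapes below are illustrative, and args stands in for the DeepKE run configuration, which is omitted here.

```python
import torch
from deepke.name_entity_re.multimodal.models.IFA_model import IFANERCRFModel

label_list = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # illustrative label set
args = ...  # placeholder: the DeepKE run configuration expected by the constructor
model = IFANERCRFModel(label_list, args)

batch, seq_len = 2, 32
output = model(                                            # call the Module instance, not .forward
    input_ids=torch.randint(0, 30522, (batch, seq_len)),
    attention_mask=torch.ones(batch, seq_len, dtype=torch.long),
    token_type_ids=torch.zeros(batch, seq_len, dtype=torch.long),
    labels=torch.zeros(batch, seq_len, dtype=torch.long),
    images=torch.randn(batch, 3, 224, 224),                # assumed CLIP-style pixel values
    aux_imgs=torch.randn(batch, 3, 3, 224, 224),           # assumed 3 auxiliary crops per example
    rcnn_imgs=torch.randn(batch, 3, 3, 224, 224),          # assumed 3 RCNN-proposed regions per example
)
```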
deepke.name_entity_re.multimodal.models.modeling_IFA module¶
- deepke.name_entity_re.multimodal.models.modeling_IFA.get_extended_attention_mask(attention_mask: torch.Tensor, input_shape: Tuple[int], device: torch.device) → torch.Tensor[source]¶
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
- Parameters
  - attention_mask (torch.Tensor) – Mask with ones indicating tokens to attend to and zeros for tokens to ignore.
  - input_shape (Tuple[int]) – The shape of the input to the model.
  - device (torch.device) – The device of the input to the model.
- Returns
  torch.Tensor – The extended attention mask, with the same dtype as attention_mask.dtype.
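For intuition, a minimal sketch of the usual broadcastable-mask convention for encoder-style (non-causal) inputs, which this helper appears to follow; the causal-mask handling for decoder inputs is omitted.

```python
import torch

def extend_attention_mask(attention_mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    # [batch, seq_len] -> [batch, 1, 1, seq_len], so the mask broadcasts over heads and query positions.
    extended = attention_mask[:, None, None, :].to(dtype)
    # 1 -> 0 (attend), 0 -> large negative (ignore); the result is added to attention scores before softmax.
    return (1.0 - extended) * torch.finfo(dtype).min

mask = torch.tensor([[1, 1, 1, 0]])         # last position is padding
print(extend_attention_mask(mask).shape)    # torch.Size([1, 1, 1, 4])
```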
- deepke.name_entity_re.multimodal.models.modeling_IFA.get_head_mask(head_mask: Optional[torch.Tensor], num_hidden_layers: int, is_attention_chunked: bool = False) → torch.Tensor[source]¶
Prepare the head mask if needed.
- Parameters
  - head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) – The mask indicating whether to keep the heads or not (1.0 for keep, 0.0 for discard).
  - num_hidden_layers (int) – The number of hidden layers in the model.
  - is_attention_chunked (bool, optional, defaults to False) – Whether or not the attention scores are computed in chunks.
- Returns
  torch.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length], or a list containing [None] for each layer.
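A sketch of the conventional behaviour, assumed here for illustration: with no mask the helper can simply hand back a per-layer list of None, while a 1-D [num_heads] mask is broadcast to a 5-D shape the attention layers can consume.

```python
import torch
from typing import List, Optional, Union

def prepare_head_mask(head_mask: Optional[torch.Tensor],
                      num_hidden_layers: int) -> Union[torch.Tensor, List[None]]:
    if head_mask is None:
        return [None] * num_hidden_layers                  # nothing to mask: one None per layer
    if head_mask.dim() == 1:                               # [num_heads]
        head_mask = head_mask[None, None, :, None, None]   # -> [1, 1, num_heads, 1, 1]
        head_mask = head_mask.expand(num_hidden_layers, -1, -1, -1, -1)
    elif head_mask.dim() == 2:                             # [num_hidden_layers, num_heads]
        head_mask = head_mask[:, None, :, None, None]
    return head_mask                                       # broadcasts over batch and both sequence dims

print(len(prepare_head_mask(None, 12)))               # 12
print(prepare_head_mask(torch.ones(8), 12).shape)     # torch.Size([12, 1, 8, 1, 1])
```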
- class deepke.name_entity_re.multimodal.models.modeling_IFA.IFAConfig(**kwargs)[source]¶
Bases: transformers.configuration_utils.PretrainedConfig
- class deepke.name_entity_re.multimodal.models.modeling_IFA.IFAPreTrainedModel(config: transformers.configuration_utils.PretrainedConfig, *inputs, **kwargs)[source]¶
Bases: transformers.modeling_utils.PreTrainedModel
- config_class¶
alias of deepke.name_entity_re.multimodal.models.modeling_IFA.IFAConfig
- base_model_prefix = 'clip'¶
- supports_gradient_checkpointing = True¶
- class deepke.name_entity_re.multimodal.models.modeling_IFA.CLIPVisionEmbeddings(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pixel_values, aux_embeddings=None, rcnn_embeddings=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertEmbeddings(config)[source]¶
Bases: torch.nn.modules.module.Module
Construct the embeddings from word, position and token_type embeddings.
- forward(input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
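Conceptually the layer sums three lookups and then normalizes; a minimal sketch with illustrative (not IFA-specific) hyperparameters:

```python
import torch
import torch.nn as nn

class TinyBertEmbeddings(nn.Module):
    """Word + position + token_type embeddings, summed, then LayerNorm and dropout."""
    def __init__(self, vocab_size=30522, hidden=768, max_pos=512, type_vocab=2, dropout=0.1):
        super().__init__()
        self.word = nn.Embedding(vocab_size, hidden)
        self.position = nn.Embedding(max_pos, hidden)
        self.token_type = nn.Embedding(type_vocab, hidden)
        self.norm = nn.LayerNorm(hidden)
        self.drop = nn.Dropout(dropout)

    def forward(self, input_ids, token_type_ids=None, past_key_values_length=0):
        seq_len = input_ids.size(1)
        position_ids = torch.arange(past_key_values_length, past_key_values_length + seq_len,
                                    device=input_ids.device).unsqueeze(0)
        if token_type_ids is None:
            token_type_ids = torch.zeros_like(input_ids)
        x = self.word(input_ids) + self.position(position_ids) + self.token_type(token_type_ids)
        return self.drop(self.norm(x))

print(TinyBertEmbeddings()(torch.randint(0, 30522, (2, 16))).shape)  # torch.Size([2, 16, 768])
```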
- class deepke.name_entity_re.multimodal.models.modeling_IFA.CLIPAttention(config)[source]¶
Bases: torch.nn.modules.module.Module
Multi-headed attention from the ‘Attention Is All You Need’ paper.
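The mechanism named here is standard scaled dot-product attention over multiple heads; a compact sketch of the core computation, with the per-head projections already applied and all shapes illustrative:

```python
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: [batch, num_heads, seq_len, head_dim]
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:                  # additive mask: large negatives where attention is disallowed
        scores = scores + mask
    weights = scores.softmax(dim=-1)      # attention probabilities over source positions
    return weights @ v                    # [batch, num_heads, seq_len, head_dim]

q = k = v = torch.randn(2, 8, 16, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```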
- class deepke.name_entity_re.multimodal.models.modeling_IFA.CLIPMLP(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertSelfAttention(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, attention_mask=None, head_mask=None, output_attentions=False, visual_hidden_state=None, output_qks=None, current_layer=None, past_key_values=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertSelfOutput(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, input_tensor)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertAttention(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, attention_mask=None, head_mask=None, output_attentions=False, visual_hidden_state=None, output_qks=None, current_layer=None, past_key_values=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertIntermediate(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertOutput(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, input_tensor)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.CLIPEncoderLayer(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states: torch.Tensor, output_attentions: bool = False, past_key_values: Optional[torch.Tensor] = None, current_layer: Optional[int] = None, output_qks=None)[source]¶
- Parameters
  - hidden_states (torch.FloatTensor) – Input to the layer, of shape (seq_len, batch, embed_dim).
  - attention_mask (torch.FloatTensor) – Attention mask of size (batch, 1, tgt_len, src_len), where padding elements are indicated by very large negative values.
  - layer_head_mask (torch.FloatTensor) – Mask for attention heads in a given layer, of size (config.encoder_attention_heads,).
  - output_attentions (bool, optional) – Whether or not to return the attention tensors of all attention layers. See attentions under returned tensors for more detail.
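To make the shape conventions above concrete, a small sketch of tensors that satisfy the documented contract (embed_dim and head count are chosen for illustration only):

```python
import torch

seq_len, batch, embed_dim, num_heads = 50, 2, 768, 12

hidden_states   = torch.randn(seq_len, batch, embed_dim)       # (seq_len, batch, embed_dim)
attention_mask  = torch.zeros(batch, 1, seq_len, seq_len)      # (batch, 1, tgt_len, src_len)
attention_mask[:, :, :, -1] = torch.finfo(torch.float32).min   # padding marked by very large negative values
layer_head_mask = torch.ones(num_heads)                        # (config.encoder_attention_heads,)

# A layer built elsewhere from the vision config would then be called as:
# outputs = clip_encoder_layer(hidden_states, output_attentions=False)
```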
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertLayer(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states, attention_mask=None, head_mask=None, output_attentions=False, visual_hidden_state=None, output_qks=None, current_layer=None, past_key_values=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.IFAEncoder(vision_config, text_config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(vision_embeds=None, text_embeds=None, attention_mask=None, head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.BertPooler(config)[source]¶
Bases: torch.nn.modules.module.Module
- forward(hidden_states)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepke.name_entity_re.multimodal.models.modeling_IFA.IFAModel(vision_config, text_config, add_pooling_layer=True)[source]¶
Bases: torch.nn.modules.module.Module
- forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, pixel_values=None, aux_values=None, rcnn_values=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
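A usage sketch of the fused forward pass, assuming vision_config and text_config can be ordinary transformers CLIPVisionConfig and BertConfig objects (the real pipeline derives them from pretrained checkpoints) and that the auxiliary/RCNN tensors carry three crops per example; all sizes are illustrative.

```python
import torch
from transformers import BertConfig, CLIPVisionConfig
from deepke.name_entity_re.multimodal.models.modeling_IFA import IFAModel

vision_config = CLIPVisionConfig()   # assumed stand-in for the CLIP vision checkpoint config
text_config = BertConfig()           # assumed stand-in for the BERT text checkpoint config
model = IFAModel(vision_config, text_config, add_pooling_layer=True)

batch, seq_len, img = 2, 32, vision_config.image_size
outputs = model(
    input_ids=torch.randint(0, text_config.vocab_size, (batch, seq_len)),
    attention_mask=torch.ones(batch, seq_len, dtype=torch.long),
    token_type_ids=torch.zeros(batch, seq_len, dtype=torch.long),
    pixel_values=torch.randn(batch, 3, img, img),
    aux_values=torch.randn(batch, 3, 3, img, img),    # assumed: 3 auxiliary crops per example
    rcnn_values=torch.randn(batch, 3, 3, img, img),   # assumed: 3 RCNN regions per example
    return_dict=True,
)
```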