Grounding DINO¶
The Grounding DINO model extends closed-set object detection to open-set object detection by incorporating a text encoder. RBLN NPUs can accelerate Grounding DINO model inference using Optimum RBLN.
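The snippet below is a minimal end-to-end sketch of this workflow: compile a Grounding DINO checkpoint for an RBLN NPU and run zero-shot detection with the standard `transformers` processor. The checkpoint id, image path, and text prompt are illustrative assumptions, not requirements of the API.

```python
# Minimal end-to-end sketch: compile Grounding DINO for an RBLN NPU and run
# zero-shot detection. The checkpoint id, image path, and text prompt below are
# illustrative assumptions, not requirements of the API.
from PIL import Image
from transformers import AutoProcessor
from optimum.rbln import RBLNGroundingDinoForObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"   # assumed example checkpoint
processor = AutoProcessor.from_pretrained(model_id)

# export=True compiles the HuggingFace checkpoint into an RBLN model.
model = RBLNGroundingDinoForObjectDetection.from_pretrained(model_id, export=True)

image = Image.open("cats.png")                   # any local image
text = "a cat. a remote control."                # lowercase phrases separated by periods

inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)

# Standard transformers post-processing for Grounding DINO outputs.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    target_sizes=[image.size[::-1]],             # (height, width)
)
print(results[0]["scores"], results[0]["boxes"])
```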
API Reference¶
Classes¶
RBLNGroundingDinoForObjectDetection
¶
Bases: RBLNModel
Functions¶
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNGroundingDinoForObjectDetectionConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
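As a hedged illustration of `from_model()`, the sketch below loads the PyTorch checkpoint with `transformers` first and then hands it to Optimum RBLN for compilation. The checkpoint id, batch size, and output directory are assumptions made for this example.

```python
# Hedged sketch of from_model(): load the PyTorch checkpoint with transformers,
# then hand it to Optimum RBLN for compilation. The checkpoint id, batch size,
# and output directory are assumptions made for this example.
from transformers import GroundingDinoForObjectDetection
from optimum.rbln import RBLNGroundingDinoForObjectDetection

torch_model = GroundingDinoForObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
)

# rbln_config may be a dict or an RBLN model configuration instance.
rbln_model = RBLNGroundingDinoForObjectDetection.from_model(
    torch_model,
    rbln_config={"batch_size": 1},
)
rbln_model.save_pretrained("grounding-dino-tiny-rbln")
```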
forward(pixel_values, input_ids, token_type_ids=None, attention_mask=None, pixel_mask=None, encoder_outputs=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs)
¶
Forward pass for the RBLN-optimized GroundingDinoForObjectDetection model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `pixel_values` | `torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)` | The tensors corresponding to the input images. | required |
| `input_ids` | `torch.LongTensor` of shape `(batch_size, text_sequence_length)` | Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. | required |
| `token_type_ids` | `torch.LongTensor` of shape `(batch_size, text_sequence_length)` | Segment token indices to indicate first and second portions of the inputs. | `None` |
| `attention_mask` | `torch.Tensor` of shape `(batch_size, sequence_length)` | Mask to avoid performing attention on padding token indices. | `None` |
| `pixel_mask` | `torch.Tensor` of shape `(batch_size, height, width)` | Mask to avoid performing attention on padding pixel values. | `None` |
| `encoder_outputs` | `Tuple` containing `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` | A sequence of hidden states at the output of the last layer of the encoder. | `None` |
| `output_attentions` | `bool` | Whether or not to return the attention tensors of all attention layers. | `None` |
| `output_hidden_states` | `bool` | Whether or not to return the hidden states of all layers. | `None` |
| `return_dict` | `bool` | Whether or not to return a `ModelOutput` instead of a plain tuple. | `None` |
Returns:
| Type | Description |
|---|---|
| `Union[GroundingDinoObjectDetectionOutput, Tuple]` | The model outputs. If `return_dict=False` is passed, returns a tuple of tensors; otherwise, returns a `GroundingDinoObjectDetectionOutput` object. |
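The following sketch shows one plausible way to call `forward()` directly with processor outputs. It assumes `model` and `processor` were created as in the earlier example and that the image path exists locally.

```python
# Illustrative direct call to forward(); assumes `model` and `processor` were
# created as in the earlier example and that the image path exists locally.
from PIL import Image

image = Image.open("street.jpg")                 # hypothetical input image
inputs = processor(images=image, text="a person. a bicycle.", return_tensors="pt")

outputs = model(
    pixel_values=inputs["pixel_values"],
    input_ids=inputs["input_ids"],
    token_type_ids=inputs.get("token_type_ids"),
    attention_mask=inputs.get("attention_mask"),
    pixel_mask=inputs.get("pixel_mask"),
    return_dict=True,
)
print(outputs.logits.shape, outputs.pred_boxes.shape)
```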
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Use it to load a pre-trained model from the HuggingFace hub (or a local path) and convert it into an RBLN model that runs on RBLN NPUs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag indicating whether the model should be compiled. If `None`, it is determined by whether compiled model files exist in `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNGroundingDinoForObjectDetectionConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
|
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
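The sketch below illustrates the two common `from_pretrained()` paths under assumed ids and paths: compiling a HuggingFace checkpoint with `export=True`, and reloading a directory that already contains compiled model files.

```python
# Sketch of the two common from_pretrained() paths; the checkpoint id and the
# local directory are assumed examples.
from optimum.rbln import RBLNGroundingDinoForObjectDetection

# 1) Compile a HuggingFace checkpoint. rbln_config can be a dict (or the same
#    options can be passed as keyword arguments with the rbln_ prefix).
model = RBLNGroundingDinoForObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny",
    export=True,
    rbln_config={"batch_size": 1},
)
model.save_pretrained("grounding-dino-tiny-rbln")

# 2) With export left as None, a directory that already contains compiled model
#    files is loaded directly instead of being recompiled.
model = RBLNGroundingDinoForObjectDetection.from_pretrained("grounding-dino-tiny-rbln")
```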
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `RBLNBaseModel.from_pretrained` class method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
RBLNGroundingDinoEncoder
¶
Bases: RBLNModel
Functions¶
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the RBLN configuration class corresponding to this model. | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
|
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Use it to load a pre-trained model from the HuggingFace hub (or a local path) and convert it into an RBLN model that runs on RBLN NPUs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag indicating whether the model should be compiled. If `None`, it is determined by whether compiled model files exist in `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the RBLN configuration class corresponding to this model. | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
|
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `RBLNBaseModel.from_pretrained` class method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
RBLNGroundingDinoDecoder
¶
Bases: RBLNModel
Functions¶
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the RBLN configuration class corresponding to this model. | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
|
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Use it to load a pre-trained model from the HuggingFace hub (or a local path) and convert it into an RBLN model that runs on RBLN NPUs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag indicating whether the model should be compiled. If `None`, it is determined by whether compiled model files exist in `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the RBLN configuration class corresponding to this model. | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
|
Returns:
| Type | Description |
|---|---|
| `RBLNModel` | An RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `RBLNBaseModel.from_pretrained` class method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
Classes¶
RBLNGroundingDinoForObjectDetectionConfig
¶
Functions¶
__init__(batch_size=None, encoder=None, decoder=None, text_backbone=None, backbone=None, output_attentions=None, output_hidden_states=None, **kwargs)
¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `batch_size` | `Optional[int]` | The batch size for image and text processing. Defaults to 1. | `None` |
| `encoder` | `Optional[RBLNModelConfig]` | The encoder configuration. Defaults to None. | `None` |
| `decoder` | `Optional[RBLNModelConfig]` | The decoder configuration. Defaults to None. | `None` |
| `text_backbone` | `Optional[RBLNModelConfig]` | The text backbone configuration. Defaults to None. | `None` |
| `backbone` | `Optional[RBLNModelConfig]` | The backbone configuration. Defaults to None. | `None` |
| `output_attentions` | `Optional[bool]` | Whether to output attentions. Defaults to None. | `None` |
| `output_hidden_states` | `Optional[bool]` | Whether to output hidden states. Defaults to None. | `None` |
| `kwargs` | `Any` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |
Raises:
| Type | Description |
|---|---|
| `ValueError` | If `batch_size` is not a positive integer. |
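As a sketch of using this configuration class explicitly instead of a plain dictionary, the example below assumes the class is importable from `optimum.rbln` (as is typical for Optimum RBLN model configs) and uses an example checkpoint id; all option values shown are illustrative.

```python
# Sketch of constructing the configuration explicitly instead of passing a plain
# dict; assumes the config class is importable from optimum.rbln and that the
# checkpoint id is an example. All option values are illustrative.
from optimum.rbln import (
    RBLNGroundingDinoForObjectDetection,
    RBLNGroundingDinoForObjectDetectionConfig,
)

config = RBLNGroundingDinoForObjectDetectionConfig(
    batch_size=1,                 # must be a positive integer, else ValueError
    output_attentions=False,
    output_hidden_states=False,
)

model = RBLNGroundingDinoForObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny",
    export=True,
    rbln_config=config,
)
```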