Idefics3

The Idefics3 model is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. The model can answer questions about images, describe visual content, create stories grounded on multiple images, or behave as a pure language model without visual inputs. RBLN NPUs can accelerate Idefics3 model inference using Optimum RBLN.

Classes

RBLNIdefics3ForConditionalGeneration

Bases: RBLNModel

RBLNIdefics3ForConditionalGeneration is a multi-modal model that integrates vision and language processing capabilities, optimized for RBLN NPUs. It is designed for conditional generation tasks that involve both image and text inputs.

This model inherits from [RBLNModel]. Check the superclass documentation for the generic methods the library implements for all its models.

Important Note

This model includes a Large Language Model (LLM) as a submodule. For optimal performance, it is highly recommended to use tensor parallelism for the language model. This can be achieved by using the rbln_config parameter in the from_pretrained method. Refer to the from_pretrained documentation and the RBLNIdefics3ForConditionalGenerationConfig class for details.

Examples:

from optimum.rbln import RBLNIdefics3ForConditionalGeneration

model = RBLNIdefics3ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics3-8b",
    export=True,
    rbln_config={
        "vision_model": {
            "device": 0,
        },
        "text_model": {
            "batch_size": 1,
            "max_seq_len": 131_072,
            "tensor_parallel_size": 8,
            "use_inputs_embeds": True,
            "attn_impl": "flash_attn",
            "kvcache_partition_len": 16_384,
            "device": [0, 1, 2, 3, 4, 5, 6, 7],
        },
    },
)

model.save_pretrained("compiled-idefics3-8b")
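When tensor parallelism is combined with an explicit device list, the text_model's device entry should contain one device id per tensor-parallel rank, as in the example above. This invariant can be sanity-checked with a small helper (a hypothetical illustration, not part of Optimum RBLN):

```python
def check_tp_devices(rbln_config: dict) -> None:
    """Verify that the text_model device list matches tensor_parallel_size."""
    text_cfg = rbln_config.get("text_model", {})
    tp = text_cfg.get("tensor_parallel_size", 1)
    devices = text_cfg.get("device")
    # When device is given as a list, one device id is expected per rank.
    if isinstance(devices, list) and len(devices) != tp:
        raise ValueError(
            f"text_model.device lists {len(devices)} devices, "
            f"but tensor_parallel_size is {tp}"
        )

config = {
    "vision_model": {"device": 0},
    "text_model": {
        "tensor_parallel_size": 8,
        "device": [0, 1, 2, 3, 4, 5, 6, 7],
    },
}
check_tp_devices(config)  # passes silently: 8 devices for 8 ranks
```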

Functions

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. You can use this function to load a pre-trained model from the HuggingFace hub and convert it to a RBLN model to be run on RBLN NPUs.

Parameters:

Name Type Description Default
model_id Union[str, Path]

The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a model previously compiled with the RBLN Compiler.

required
export bool

A boolean flag to indicate whether the model should be compiled.

False
rbln_config Optional[Union[Dict, RBLNModelConfig]]

Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.

None
kwargs Dict[str, Any]

Additional keyword arguments. Arguments with the prefix 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

{}
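The 'rbln_' prefix routing described above can be sketched as a simple split over the keyword arguments (a simplified illustration, not the library's actual implementation):

```python
def split_kwargs(kwargs: dict) -> tuple[dict, dict]:
    """Route 'rbln_'-prefixed kwargs to rbln_config; pass the rest to HuggingFace."""
    prefix = "rbln_"
    # Strip the prefix for the rbln_config side.
    rbln_kwargs = {
        k[len(prefix):]: v for k, v in kwargs.items() if k.startswith(prefix)
    }
    # Everything else is forwarded unchanged to the HuggingFace library.
    hf_kwargs = {k: v for k, v in kwargs.items() if not k.startswith(prefix)}
    return rbln_kwargs, hf_kwargs

rbln_kwargs, hf_kwargs = split_kwargs(
    {"rbln_batch_size": 1, "torch_dtype": "bfloat16"}
)
# rbln_kwargs == {"batch_size": 1}; hf_kwargs == {"torch_dtype": "bfloat16"}
```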

Returns:

Type Description
Self

A RBLN model instance ready for inference on RBLN NPU devices.

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

Name Type Description Default
model PreTrainedModel

The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.

required
rbln_config Optional[Union[Dict, RBLNModelConfig]]

Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.

None
kwargs Dict[str, Any]

Additional keyword arguments. Arguments with the prefix 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

{}

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

Type Description
Self

A RBLN model instance ready for inference on RBLN NPU devices.

save_pretrained(save_directory)

Saves a model and its configuration file to a directory, so that it can be re-loaded using the [from_pretrained] class method.

Parameters:

Name Type Description Default
save_directory Union[str, PathLike]

The directory to save the model and its configuration files. Will be created if it doesn't exist.

required

Classes

RBLNIdefics3ForConditionalGenerationConfig

Bases: RBLNModelConfig

Configuration class for RBLNIdefics3ForConditionalGeneration models.

This class extends RBLNModelConfig to include settings specific to the Idefics3 vision-language model optimized for RBLN devices. It allows configuration of the batch size and separate configurations for the vision and text submodules.

Attributes:

Name Type Description
submodules List[str]

List of submodules included in the model. Defaults to ["vision_model", "text_model"].

Functions

__init__(batch_size=None, vision_model=None, text_model=None, **kwargs)

Parameters:

Name Type Description Default
batch_size Optional[int]

The batch size for inference. Defaults to 1 if not specified.

None
vision_model Optional[RBLNModelConfig]

Configuration for the vision transformer component. This can include settings specific to the vision encoder, such as input resolution or other vision-related parameters. If not provided, default settings will be used.

None
text_model Optional[RBLNModelConfig]

Configuration for the text model component. This can include settings specific to the language model, such as tensor parallelism or other text-related parameters. If not provided, default settings will be used.

None
**kwargs Dict[str, Any]

Additional keyword arguments passed to the parent RBLNModelConfig.

{}

Raises:

Type Description
ValueError

If batch_size is provided but is not a positive integer.
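The batch_size validation described above can be approximated with a minimal stand-in class (a sketch of the expected behavior, not the real RBLN config class):

```python
class Idefics3ConfigSketch:
    """Minimal sketch of batch_size validation; not the actual
    RBLNIdefics3ForConditionalGenerationConfig implementation."""

    def __init__(self, batch_size=None, vision_model=None, text_model=None):
        # Reject non-positive or non-integer batch sizes.
        if batch_size is not None and (
            not isinstance(batch_size, int) or batch_size <= 0
        ):
            raise ValueError(
                f"batch_size must be a positive integer, got {batch_size!r}"
            )
        self.batch_size = batch_size if batch_size is not None else 1
        self.vision_model = vision_model
        self.text_model = text_model

cfg = Idefics3ConfigSketch()
# cfg.batch_size defaults to 1 when not specified
```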