# Optimum RBLN
Optimum RBLN is a library that bridges Hugging Face transformers and diffusers models to RBLN NPUs, namely ATOM (RBLN-CA02) and ATOM+ (RBLN-CA12). With it, the various downstream tasks of Hugging Face models can be easily compiled and run on a single NPU or on multiple NPUs (Rebellions Scalable Design). The tables below list the models currently supported by Optimum RBLN.
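Compiling and running a supported model follows the familiar `optimum` workflow: load a checkpoint through an RBLN model class with `export=True` to compile it for the NPU, save the compiled artifacts, then use the standard `generate()` API. The following is a minimal sketch, assuming the `optimum.rbln` class `RBLNGPT2LMHeadModel` and the `rbln_max_seq_len` compile option behave as in the published examples; consult the API reference for exact signatures.

```python
from optimum.rbln import RBLNGPT2LMHeadModel
from transformers import AutoTokenizer

# Compile the Hugging Face checkpoint for the RBLN NPU.
# export=True triggers compilation from the original PyTorch weights.
model = RBLNGPT2LMHeadModel.from_pretrained(
    "gpt2",
    export=True,
    rbln_max_seq_len=1024,  # assumed compile-time option; see the API reference
)
model.save_pretrained("gpt2-rbln")  # save compiled artifacts for reuse

# Inference uses the standard transformers generate() API.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```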
## Transformers

### Single NPU
| Model | Dataset | Task |
|---|---|---|
| Phi-2 | 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 | Text Generation |
| Gemma-2b | 6 trillion tokens of web, code, and mathematics text | Text Generation |
| GPT2 | WebText | Text Generation |
| GPT2-medium | WebText | Text Generation |
| GPT2-large | WebText | Text Generation |
| GPT2-xl | WebText | Text Generation |
| T5-small | Colossal Clean Crawled Corpus | Text Generation |
| T5-base | Colossal Clean Crawled Corpus | Text Generation |
| T5-large | Colossal Clean Crawled Corpus | Text Generation |
| T5-3b | Colossal Clean Crawled Corpus | Text Generation |
| BART-base | BookCorpus + etc. | Text Generation |
| BART-large | BookCorpus + etc. | Text Generation |
| KoBART-base | Korean Wiki | Text Generation |
| E5-base-4K | Colossal Clean text Pairs | Embedding Retrieval |
| BERT-base | BookCorpus & English Wikipedia; SQuAD v2 | Masked Language Modeling |
| BERT-large | BookCorpus & English Wikipedia; SQuAD v2 | Masked Language Modeling |
| DistilBERT-base | BookCorpus & English Wikipedia; SQuAD v2 | Question Answering |
| SecureBERT | A manually crafted dataset from the human-readable descriptions of MITRE ATT&CK techniques and tactics | Masked Language Modeling |
| RoBERTa | A manually crafted dataset from the human-readable descriptions of MITRE ATT&CK techniques and tactics | Text Classification |
| BGE-M3 | MLDR and bge-m3-data | Embedding Retrieval |
| BGE-Reranker-V2-M3 | MLDR and bge-m3-data | Embedding Retrieval |
| BGE-Reranker-Base | MLDR and bge-m3-data | Embedding Retrieval |
| BGE-Reranker-Large | MLDR and bge-m3-data | Embedding Retrieval |
| Whisper-tiny | 680k hours of labeled data from the web | Speech to Text |
| Whisper-base | 680k hours of labeled data from the web | Speech to Text |
| Whisper-small | 680k hours of labeled data from the web | Speech to Text |
| Whisper-medium | 680k hours of labeled data from the web | Speech to Text |
| Whisper-large-v3 | 680k hours of labeled data from the web | Speech to Text |
| Whisper-large-v3-turbo | 680k hours of labeled data from the web | Speech to Text |
| Wav2Vec2 | LibriSpeech | Speech to Text |
| Audio-Spectrogram-Transformer | AudioSet | Audio Classification |
| DPT-large | MIX 6 | Monocular Depth Estimation |
| ViT-large | ImageNet-21k & ImageNet | Image Classification |
| ResNet50 | ILSVRC2012 | Image Classification |
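The non-generative entries above follow the same export pattern with their own task-specific classes. As an illustration, here is a hedged sketch of speech-to-text with Whisper-tiny, assuming an `RBLNWhisperForConditionalGeneration` class that mirrors transformers' `WhisperForConditionalGeneration`:

```python
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Compile Whisper-tiny for the NPU (class name assumed; see the API reference).
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-tiny",
    export=True,
)
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

# Transcribe a sample clip from a small public test split.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```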
### Multi-NPU
Note

Multi-NPU support is available only on ATOM+ (RBLN-CA12). You can check which type of NPU you are currently using with the `rbln-stat` command.
| Model | Dataset | Recommended # of NPUs | Task |
|---|---|---|---|
| Llama3-8b | A new mix of publicly available online data | 4 | Text Generation |
| Llama2-7b | A new mix of publicly available online data | 4 | Text Generation |
| Llama2-13b | A new mix of publicly available online data | 8 | Text Generation |
| Gemma-7b | 6 trillion tokens of web, code, and mathematics text | 4 | Text Generation |
| Mistral-7b | Publicly available online data | 4 | Text Generation |
| Qwen2-7b | 7T tokens of internal data | 4 | Text Generation |
| Salamandra-7b | 2.4T tokens of 35 European languages and 92 programming languages | 4 | Text Generation |
| EXAONE-3.0-7.8b | 8T tokens of curated English and Korean data | 4 | Text Generation |
| Mi:dm-7b | AI-HUB / the National Institute of Korean Language | 4 | Text Generation |
| SOLAR-10.7b | alpaca-gpt4-data + etc. | 8 | Text Generation |
| EEVE-Korean-10.8b | Korean-translated ver. of Open-Orca/SlimOrca-Dedup and argilla/ultrafeedback-binarized-preferences-cleaned | 8 | Text Generation |
| Llava-v1.6-mistral-7b | - | 4 | Image Captioning |
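Sharding one of these models across several NPUs is configured at compile time. Below is a minimal sketch for Llama3-8b, assuming a `rbln_tensor_parallel_size` compile option set to the recommended NPU count from the table above:

```python
from optimum.rbln import RBLNLlamaForCausalLM

# Compile Llama3-8b sharded across 4 ATOM+ NPUs (Rebellions Scalable Design).
model = RBLNLlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    export=True,
    rbln_max_seq_len=8192,        # assumed compile-time option
    rbln_tensor_parallel_size=4,  # recommended # of NPUs for Llama3-8b
)
model.save_pretrained("llama3-8b-rbln")
```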
## Diffusers
| Model | Dataset | Task |
|---|---|---|
| Stable Diffusion v1.5 | LAION-2B | Text to Image |
| Stable Diffusion XL | - | Text to Image |
| SDXL-turbo | - | Text to Image |
| ControlNet | - | Text to Image |
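Diffusion pipelines are exported the same way as the transformers models. Here is a sketch for Stable Diffusion v1.5, assuming an `RBLNStableDiffusionPipeline` wrapper that mirrors diffusers' `StableDiffusionPipeline`:

```python
from optimum.rbln import RBLNStableDiffusionPipeline

# Compile the full pipeline (text encoder, UNet, VAE) for the NPU.
pipe = RBLNStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    export=True,
)
pipe.save_pretrained("sd-v1-5-rbln")

# Generate an image from a text prompt.
image = pipe(prompt="a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```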