Supported Mistral Model

You can import large language models into OCI Generative AI from Hugging Face or from an OCI Object Storage bucket, create endpoints for those models, and use them in the Generative AI service.
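As a minimal sketch of the Object Storage path, the snippet below downloads a model from Hugging Face and uploads its files to a bucket. The bucket name is a placeholder, and the actual prerequisites and import steps are covered in Managing Imported Models (New).

```python
# Sketch: stage a Hugging Face model in an OCI Object Storage bucket so it
# can be imported into OCI Generative AI. Bucket name is an assumption.
import os
import oci  # pip install oci
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the model files to a local directory.
local_dir = snapshot_download(repo_id="intfloat/e5-mistral-7b-instruct")

# Upload every file to the bucket, preserving relative paths as object names.
config = oci.config.from_file()  # reads ~/.oci/config
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
bucket = "my-imported-models"  # hypothetical bucket

for root, _, files in os.walk(local_dir):
    for name in files:
        path = os.path.join(root, name)
        object_name = os.path.relpath(path, local_dir)
        with open(path, "rb") as f:
            object_storage.put_object(namespace, bucket, object_name, f)
```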

These models have a high-performance, decoder-only Transformer architecture featuring Sliding Window Attention (SWA) for efficient long-context handling and optional Grouped Query Attention (GQA) for improved scalability. For more details, see Mistral in the Hugging Face documentation.
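As a brief illustration of how SWA bounds attention cost, the NumPy sketch below builds the causal sliding-window mask: each query position attends only to itself and the previous window − 1 tokens, so cost grows with the window size rather than the full sequence length.

```python
# Sketch: the causal sliding-window mask used by Sliding Window Attention.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """True where query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    return (j <= i) & (j > i - window)

# Each row allows at most `window` positions, ending at the diagonal.
print(sliding_window_mask(6, 3).astype(int))
```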

Mistral

Supported Mistral Model

Hugging Face Model ID            Model Capability   Recommended Dedicated AI Cluster Unit Shape
-------------------------------  -----------------  --------------------------------------------
intfloat/e5-mistral-7b-instruct  EMBEDDING          A10_X1
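After you create an endpoint for the imported model, you can generate embeddings through the OCI Python SDK. The snippet below is a sketch: the endpoint OCID, compartment OCID, and region in the service endpoint are placeholders you must replace with your own values.

```python
# Sketch: call an embedding endpoint created for an imported model.
import oci  # pip install oci

config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    # Example region; use the region hosting your endpoint.
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

details = oci.generative_ai_inference.models.EmbedTextDetails(
    inputs=["What is sliding window attention?"],
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example"  # placeholder
    ),
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
)

response = client.embed_text(details)
print(len(response.data.embeddings[0]))  # embedding dimension
```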
Note

  • To import a fine-tuned version of a model, the fine-tuned model must use the same transformers version as the original model and have a parameter count within ±10% of the original (see the version-check sketch after this list).
  • If the instance type for the recommended unit shape isn’t available in your region, select a higher-tier instance (for example, select an H100 shape instead of an A100-80G shape).
  • For prerequisites and how to import models, see Managing Imported Models (New).
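As a hedged pre-check for the first note above, the sketch below compares the transformers_version recorded in each model's config.json on Hugging Face. The fine-tuned repository name is hypothetical.

```python
# Sketch: compare the transformers version of a base model and a fine-tune
# before attempting an import (a local pre-check, not part of the OCI SDK).
import json
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

def transformers_version(repo_id: str) -> str:
    """Read the transformers_version recorded in a model's config.json."""
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(config_path) as f:
        return json.load(f).get("transformers_version", "unknown")

base = transformers_version("intfloat/e5-mistral-7b-instruct")
tuned = transformers_version("my-org/my-finetuned-e5")  # hypothetical repo
if base != tuned:
    print(f"Version mismatch: base={base}, fine-tuned={tuned}; import may fail.")
```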