xAI Grok 4 Fast (New)

The xAI Grok 4 Fast model is a speed- and cost-optimized version of the xAI Grok 4 model. Like Grok 4, this model excels at enterprise use cases such as data extraction, coding, and text summarization, and has deep domain knowledge in finance, healthcare, law, and science.

Grok 4 Fast is faster than Grok 4, with a rapid time-to-first-token and high output speed. This focus on speed makes the model ideal for real-time applications.

Available in These Regions

  • US East (Ashburn) (on-demand only)
  • US Midwest (Chicago) (on-demand only)
  • US West (Phoenix) (on-demand only)
Important

External Calls

The xAI Grok models are hosted in an OCI data center, in a tenancy provisioned for xAI, and are managed by xAI. You can access them through the OCI Generative AI service.

Overview

The xAI Grok 4 Fast model comes in two modes, offered as two separate models: a Reasoning model and a Non-Reasoning model. See the following table to help you decide which model to select.

| Mode | Model Name | How It Works | When to Use |
| --- | --- | --- | --- |
| Reasoning | xai.grok-4-fast-reasoning | Generates thinking tokens for step-by-step chain-of-thought analysis, delivering deeper logical reasoning with higher latency (though lower than previous reasoning models). | Complex, multi-step problems that need careful, analytical solutions. |
| Non-Reasoning | xai.grok-4-fast-non-reasoning | Skips the thinking-token phase and returns instant, pattern-matched answers. | Simple, straightforward queries where speed is the priority. |
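
The two model names can be wired into a simple router that picks a model per request. This is an illustrative sketch; the complexity flag is a stand-in for whatever classification your application uses, not part of the OCI Generative AI API:

```python
# Illustrative model router for the two Grok 4 Fast variants.
# The `requires_multistep_reasoning` heuristic is hypothetical.
REASONING_MODEL = "xai.grok-4-fast-reasoning"
NON_REASONING_MODEL = "xai.grok-4-fast-non-reasoning"

def pick_model(requires_multistep_reasoning: bool) -> str:
    """Return the OCI model name suited to the workload."""
    return REASONING_MODEL if requires_multistep_reasoning else NON_REASONING_MODEL
```

For example, `pick_model(True)` returns `xai.grok-4-fast-reasoning` for a multi-step analytical task, while quick lookups route to the non-reasoning model.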

Key Features

  • Model names in OCI Generative AI:
    • xai.grok-4-fast-reasoning
    • xai.grok-4-fast-non-reasoning
  • Available On-Demand: Access this model on-demand, through the Console playground or the API.
  • Multimodal support: Input text and images and get a text output.
  • Knowledge: Has deep domain knowledge in finance, healthcare, law, and science.
  • Context Length: 2 million tokens (the combined prompt and response can total up to 2 million tokens). In the playground, the response length is capped at 16,000 tokens per run, but the context window remains 2 million tokens.
  • Modes: Operates in two modes: "reasoning" for complex tasks and "non-reasoning" for speed-critical, straightforward requests.
  • Function Calling: Yes, through the API.
  • Structured Outputs: Yes.
  • Cached Input Tokens: Yes

    Important note: The cached-input feature works in both the playground and the API, but cached input token counts can be retrieved only through the API.

  • Knowledge Cutoff: Not available

Limits

Image Inputs
  • Console: Upload one or more .png or .jpg images, each 5 MB or smaller.
  • API: Only JPG/JPEG and PNG file formats are supported. Submit a base64-encoded version of each image, ensuring that each converted image is more than 256 and fewer than 1,792 tokens. For example, a 512 x 512 image typically converts to around 1,610 tokens. There's no stated maximum number of images that you can upload, but the combined token count for text and images must stay within the model's overall context window of 2 million tokens.
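
Preparing an image for the API comes down to reading the file and base64-encoding the bytes. A minimal sketch using only the standard library; the helper name is hypothetical, and the 5 MB guard borrows the Console limit as a conservative assumption:

```python
import base64
from pathlib import Path

MAX_BYTES = 5 * 1024 * 1024  # 5 MB guard, mirroring the Console limit (assumption for the API)

def encode_image(path: str) -> str:
    """Return a base64 string for a .png or .jpg file, ready to submit to the API."""
    data = Path(path).read_bytes()
    if len(data) > MAX_BYTES:
        raise ValueError(f"{path} is larger than 5 MB")
    return base64.b64encode(data).decode("ascii")
```

Note that the 256 to 1,792 token range applies to the converted image on the service side; the local byte size is only a rough proxy for it.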

On-Demand Mode

Note

The Grok models are available only in the on-demand mode.
| Model Name | OCI Model Name | Pricing Page Product Name |
| --- | --- | --- |
| xAI Grok 4 Fast | xai.grok-4-fast-reasoning, xai.grok-4-fast-non-reasoning | xAI - Grok 4 Fast |

Prices are listed for:
  • Input Tokens
  • Output Tokens
  • Cached Input Tokens
You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated. Here are key features for the on-demand mode:
  • You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.

  • Low barrier to start using Generative AI.
  • Great for experimentation, proof of concept, and model evaluation.
  • Available for the pretrained models in regions not listed as (dedicated AI cluster only).

Release Date

| Model | General Availability Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
| --- | --- | --- | --- |
| xai.grok-4-fast-reasoning | 2025-10-10 | Tentative | This model isn't available for the dedicated mode. |
| xai.grok-4-fast-non-reasoning | 2025-10-10 | Tentative | This model isn't available for the dedicated mode. |
Important

For a list of all model timelines and retirement details, see Retiring the Models.

Model Parameters

To change the model responses, you can change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt and each response doesn't necessarily use up the maximum allocated tokens.

Tip

For large inputs with difficult problems, set a high value for the maximum output tokens parameter. See Troubleshooting.
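
The four-characters-per-token estimate can be turned into a quick budgeting helper. This is a rough heuristic, not the model's actual tokenizer, and the function name is illustrative:

```python
PLAYGROUND_RESPONSE_CAP = 16_000  # per-run response cap in the playground, noted above

def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, len(text) // 4)
```

For example, a 400-character prompt estimates to about 100 tokens, leaving ample headroom against the playground's 16,000-token response cap.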
Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2

Tip

Start with the temperature set to 0 or a value below 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
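
Temperature rescales the model's token scores before sampling. The following generic softmax sketch shows the effect (it illustrates the standard technique, not xAI's internal implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; lower temperature sharpens the distribution."""
    if temperature <= 0:
        # Temperature 0 degenerates to greedy (argmax) selection.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.0]`, a temperature of 0.5 concentrates probability on the top token far more than a temperature of 2.0, which flattens the distribution and makes less likely tokens easier to sample.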
Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
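
Top p (nucleus) sampling keeps the smallest set of highest-probability tokens whose cumulative probability reaches p. A minimal sketch of the standard filtering step, with illustrative names:

```python
def top_p_filter(probs, p):
    """Return the indices of the smallest token set whose cumulative probability reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    return sorted(kept)
```

With probabilities `[0.5, 0.3, 0.2]` and p = 0.75, only the first two tokens survive (0.5 + 0.3 = 0.8 covers the requested mass); with p = 1.0, all tokens are considered.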

Troubleshooting

Issue: The Grok 4 Fast model doesn't respond.

Cause: The Maximum output tokens parameter in the playground or the max_tokens parameter in the API is likely set too low. For example, this parameter defaults to 600 tokens in the playground, which might be too low for complex tasks.

Action: Increase the maximum output tokens parameter.
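
As a sketch of the fix on the API side, the request body below raises the token limit well above the 600-token playground default. The field names (servingMode, chatRequest, maxTokens) are assumptions modeled on the OCI Generative AI chat API's generic format; verify them against the API reference before use:

```python
def build_chat_body(model_id: str, prompt: str, max_tokens: int = 4000) -> dict:
    """Assemble a chat request body with a raised max-token limit.

    Field names are assumptions based on the OCI Generative AI chat API's
    generic format; check the official API reference.
    """
    return {
        "servingMode": {"servingType": "ON_DEMAND", "modelId": model_id},
        "chatRequest": {
            "apiFormat": "GENERIC",
            "messages": [
                {"role": "USER", "content": [{"type": "TEXT", "text": prompt}]}
            ],
            "maxTokens": max_tokens,  # raised well above the 600-token playground default
        },
    }
```

Setting `max_tokens` generously costs nothing when responses are short, since billing is per token actually generated, but it prevents complex reasoning runs from being cut off.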