OpenAI gpt-oss-20b (New)
OCI Generative AI supports access to the pretrained OpenAI gpt-oss-20b model.
The openai.gpt-oss-20b model is an open-weight, text-only language model designed for powerful reasoning and agentic tasks.
Available in These Regions
- Germany Central (Frankfurt)
- Japan Central (Osaka)
- US Midwest (Chicago)
Access this Model
Key Features
- Model Name in OCI Generative AI: openai.gpt-oss-20b
- Model Size: 21 billion parameters
- Text Mode Only: Input text and get a text output. Images and file inputs such as audio, video, and document files aren't supported.
- Knowledge: Specialized in advanced reasoning and text-based tasks across a wide range of subjects.
- Context Length: 128,000 tokens (maximum prompt + response length is 128,000 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
- Excels at These Use Cases: Because of its training data, this model is especially strong in STEM (science, technology, engineering, and mathematics), coding, and general knowledge. Use it for low-latency, on-device use cases, local inference, or rapid iteration when large memory isn't available.
- Function Calling: Yes, through the API.
- Has Reasoning: Yes.
- Knowledge Cutoff: June 2024
For key feature details, see the OpenAI gpt-oss documentation.
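For illustration, here's a minimal sketch of an on-demand chat call using the OCI Python SDK. The client and model class names come from the SDK's generative_ai_inference module; the compartment OCID and region endpoint are placeholders, and some SDK examples pass a model OCID instead of the model name, so verify the request shape against the SDK reference.

```python
# A minimal sketch, assuming the OCI Python SDK (pip install oci) and a
# configured ~/.oci/config profile. The OCIDs below are placeholders.
import oci

config = oci.config.from_file()  # default profile
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

chat_request = oci.generative_ai_inference.models.GenericChatRequest(
    messages=[
        oci.generative_ai_inference.models.UserMessage(
            content=[
                oci.generative_ai_inference.models.TextContent(
                    text="Explain beam search in two sentences."
                )
            ]
        )
    ],
    max_tokens=600,
)

response = client.chat(
    oci.generative_ai_inference.models.ChatDetails(
        compartment_id="ocid1.compartment.oc1..example",  # placeholder
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="openai.gpt-oss-20b"  # model name per this page
        ),
        chat_request=chat_request,
    )
)
print(response.data)
```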
On-Demand Mode
The OpenAI gpt-oss-20b model is available only in the on-demand mode.
| Model Name | OCI Model Name | Pricing Page Product Name |
|---|---|---|
| OpenAI gpt-oss-20b | openai.gpt-oss-20b | OpenAI - gpt-oss-20b |
- You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
- Low barrier to start using Generative AI.
- Great for experimentation, proof of concept, and model evaluation.
- Available for the pretrained models in regions not listed as (dedicated AI cluster only).
Dynamic Throttling Limit Adjustment for On-Demand Mode
OCI Generative AI dynamically adjusts the request throttling limit for each active tenancy based on model demand and system capacity to optimize resource allocation and ensure fair access.
This adjustment depends on the following factors:
- The current maximum throughput supported by the target model.
- Any unused system capacity at the time of adjustment.
- Each tenancy’s historical throughput usage and any specified override limits set for that tenancy.
Note: Because of dynamic throttling, rate limits are undocumented and can change to meet system-wide demand.
Because of the dynamic throttling limit adjustment, we recommend implementing a back-off strategy, which involves delaying requests after a rejection. Without one, repeated rapid requests can lead to further rejections over time, increased latency, and potential temporary blocking of the client by the Generative AI service. A back-off strategy, such as an exponential back-off, distributes requests more evenly, reduces load, and improves retry success, following industry best practices and enhancing the overall stability and performance of your integration with the service, as sketched below.
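For illustration, here's a minimal sketch of exponential back-off with full jitter in Python. The send_request callable is a hypothetical stand-in for one inference call, and the status attribute check follows the OCI Python SDK's ServiceError; adapt both to your client library.

```python
import random
import time

def call_with_backoff(send_request, max_retries=6, base_delay=1.0, max_delay=30.0):
    """Retry a throttled call with exponential back-off and full jitter.

    send_request is a hypothetical zero-argument callable that performs
    one inference call. The `status` attribute check matches the OCI
    Python SDK's ServiceError; narrow the except clause to your SDK's
    error type in real code.
    """
    for attempt in range(max_retries):
        try:
            return send_request()
        except Exception as exc:
            status = getattr(exc, "status", None)
            if status != 429 or attempt == max_retries - 1:
                raise  # not a throttling response, or retries exhausted
            # Double the delay window on each attempt, cap it, and pick a
            # random point in the window so clients don't retry in lockstep.
            window = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0.0, window))
```

Full jitter spreads retries randomly across the whole delay window, which helps prevent many throttled clients from retrying at the same instant and being rejected again together.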
Dedicated AI Cluster for the Model
In the preceding region list, regions that aren't marked with (dedicated AI cluster only) have both on-demand and dedicated AI cluster options. For the on-demand option, you don't need clusters and you can reach the model in the Console playground or through the API. Learn about the dedicated mode.
To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. For the cluster unit size that matches this model, see the following table. A sketch of calling such an endpoint follows the table.
| Base Model | Fine-Tuning Cluster | Hosting Cluster | Pricing Page Information | Request Cluster Limit Increase |
|---|---|---|---|---|
| OpenAI gpt-oss-20b | Not available for fine-tuning | | | dedicated-unit-h100-count: 1 |
If you don't have enough cluster limits in your tenancy for hosting the OpenAI gpt-oss-20b model on a dedicated AI cluster, request the limit dedicated-unit-h100-count to increase by 1.
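If you host the model on a dedicated AI cluster, the inference call changes only in its serving mode: you target the endpoint's OCID rather than the model name. A minimal sketch, assuming the OCI Python SDK's DedicatedServingMode class; the endpoint OCID is a placeholder.

```python
# A minimal sketch, assuming the OCI Python SDK; the endpoint OCID below
# is a placeholder for an endpoint you created on a dedicated AI cluster.
from oci.generative_ai_inference.models import DedicatedServingMode

serving_mode = DedicatedServingMode(
    endpoint_id="ocid1.generativeaiendpoint.oc1..example"  # placeholder
)
# Pass serving_mode in ChatDetails in place of OnDemandServingMode.
```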
Release and Retirement Dates
| Model | Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
|---|---|---|---|
| openai.gpt-oss-20b | 2025-11-17 | At least one month after the release of the 1st replacement model. | At least 6 months after the release of the 1st replacement model. |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens

  The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token, so a response of about 2,000 characters uses roughly 500 tokens. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 128,000 tokens for each run. In the playground, the maximum output tokens value is capped at 16,000 tokens for each run.

  Tip: For large inputs with difficult problems, set a high value for the maximum output tokens parameter.

- Temperature

  The level of randomness used to generate the output text. Min: 0, Max: 2, Default: 1

  Tip: Start with the temperature set to 0 or a value less than 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.

- Top p

  A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens. Default: 1

- Frequency penalty

  A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output. Set to 0 to disable. Default: 0

- Presence penalty

  A penalty that's assigned to each token when it appears in the output, to encourage generating outputs with tokens that haven't been used. Set to 0 to disable. Default: 0
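For illustration, these parameters map onto fields of the chat request in the API. A minimal sketch using the OCI Python SDK's GenericChatRequest; the field names follow the SDK reference, and the values here are arbitrary.

```python
# A minimal sketch, assuming the OCI Python SDK's generative_ai_inference
# models; verify field names against the SDK reference.
from oci.generative_ai_inference.models import (
    GenericChatRequest,
    TextContent,
    UserMessage,
)

chat_request = GenericChatRequest(
    messages=[
        UserMessage(content=[TextContent(text="Summarize CRISPR in one paragraph.")])
    ],
    max_tokens=1000,        # maximum output tokens for this response
    temperature=0.2,        # 0-2; lower values are more deterministic
    top_p=0.75,             # consider the top 75 percent of probability mass
    frequency_penalty=0.0,  # 0 disables the penalty
    presence_penalty=0.0,   # 0 disables the penalty
)
```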