| Annotation |
An annotation attached to the assistant’s message, used to represent additional metadata such as
citations.
|
| Annotation.Builder |
|
| ApplyGuardrailsDetails |
Details for applying guardrails to the input text.
|
| ApplyGuardrailsDetails.Builder |
|
| ApplyGuardrailsResult |
The result of applying guardrails to the input text.
|
| ApplyGuardrailsResult.Builder |
|
| ApproximateLocation |
To refine search results based on geography, you can specify an approximate user location; for
example, city and region are free-text strings, such as “Minneapolis” and “Minnesota”.
|
| ApproximateLocation.Builder |
|
| AssistantMessage |
Represents a single instance of an assistant message.
|
| AssistantMessage.Builder |
|
| AudioContent |
Represents a single instance of chat audio content.
|
| AudioContent.Builder |
|
| AudioUrl |
Provides base64-encoded audio, or an audio URI if supported.
|
| AudioUrl.Builder |
|
| AudioUrl.Detail |
The default value is AUTO and only AUTO is supported.
|
| BaseChatRequest |
The base class to use for the chat inference request.
|
| BaseChatRequest.ApiFormat |
The API format for the model’s family group.
|
| BaseChatResponse |
The base class that creates the chat response.
|
| BaseChatResponse.ApiFormat |
The API format for the model’s response.
|
| CategoryScore |
A category with its score.
|
| CategoryScore.Builder |
|
| ChatChoice |
Represents a single instance of the chat response.
|
| ChatChoice.Builder |
|
| ChatContent |
The base class for the chat content.
|
| ChatContent.Type |
The type of the content.
|
| ChatDetails |
Details of the conversation for the model to respond to.
|
| ChatDetails.Builder |
|
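Taken together, ChatDetails, a ServingMode, and a BaseChatRequest subclass describe one chat call. The following is a minimal, hedged sketch of wiring them up with the builders listed on this page; the OCID and model ID are placeholders, and the import path and field names (compartmentId, servingMode, chatRequest) are assumed from the usual OCI Java SDK builder pattern rather than confirmed here.

```java
// Sketch only: package path and builder fields assumed from OCI Java SDK conventions.
import com.oracle.bmc.generativeaiinference.model.*;

public class ChatDetailsSketch {
    static ChatDetails buildChatDetails(BaseChatRequest chatRequest) {
        return ChatDetails.builder()
                .compartmentId("ocid1.compartment.oc1..example")   // placeholder OCID
                .servingMode(OnDemandServingMode.builder()
                        .modelId("cohere.command-r-plus")          // placeholder model ID
                        .build())
                .chatRequest(chatRequest)  // e.g. a CohereChatRequest or GenericChatRequest
                .build();
    }
}
```

The concrete chatRequest you pass in determines which API format (COHERE or GENERIC) the call uses, as described by BaseChatRequest.ApiFormat.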
| ChatResult |
The response to the chat conversation.
|
| ChatResult.Builder |
|
| Choice |
Represents a single instance of the generated text.
|
| Choice.Builder |
|
| Citation |
A section of the generated response which cites the documents that were used for generating the
response.
|
| Citation.Builder |
|
| CohereChatBotMessage |
A message that represents a single chat dialog with the CHATBOT role.
|
| CohereChatBotMessage.Builder |
|
| CohereChatRequest |
Details for the chat request for Cohere models.
|
| CohereChatRequest.Builder |
|
| CohereChatRequest.CitationQuality |
When FAST is selected, citations are generated at the same time as the text output and the
request will be completed sooner.
|
| CohereChatRequest.PromptTruncation |
Defaults to OFF.
|
| CohereChatRequest.SafetyMode |
Safety mode: Adds a safety instruction for the model to use when generating responses.
|
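The sketch below shows a CohereChatRequest built with this family of classes. It is a hedged example: the field names (message, maxTokens, temperature, isStream, promptTruncation) and the enum constant casing follow the builder pattern implied by this list and should be verified against the actual class.

```java
// Sketch only: package path, field names, and enum constant names assumed.
import com.oracle.bmc.generativeaiinference.model.*;

public class CohereChatRequestSketch {
    static CohereChatRequest buildRequest() {
        return CohereChatRequest.builder()
                .message("Summarize the quarterly report in three sentences.")
                .maxTokens(400)
                .temperature(0.3)
                .isStream(false)
                .promptTruncation(CohereChatRequest.PromptTruncation.Off) // constant name assumed
                .build();
    }
}
```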
| CohereChatResponse |
The response to the chat conversation.
|
| CohereChatResponse.Builder |
|
| CohereChatResponse.FinishReason |
Why the generation stopped.
|
| CohereLlmInferenceRequest |
Details for the text generation request for Cohere models.
|
| CohereLlmInferenceRequest.Builder |
|
| CohereLlmInferenceRequest.ReturnLikelihoods |
Specifies whether and how token likelihoods are returned with the response.
|
| CohereLlmInferenceRequest.Truncate |
For an input that’s longer than the maximum token length, specifies which part of the input
text will be truncated.
|
| CohereLlmInferenceResponse |
The generated text result to return.
|
| CohereLlmInferenceResponse.Builder |
|
| CohereMessage |
A message that represents a single chat dialog.
|
| CohereMessage.Role |
To identify who the message is coming from, a role is associated with each message.
|
| CohereParameterDefinition |
A definition of a tool parameter.
|
| CohereParameterDefinition.Builder |
|
| CohereResponseFormat |
Specifies the format that the model output is guaranteed to conform to.
Note: Objects should always be created or deserialized using the Builder.
|
| CohereResponseFormat.Type |
The format type.
|
| CohereResponseJsonFormat |
|
| CohereResponseJsonFormat.Builder |
|
| CohereResponseTextFormat |
|
| CohereResponseTextFormat.Builder |
|
| CohereSystemMessage |
A message that represents a single chat dialog with the SYSTEM role.
|
| CohereSystemMessage.Builder |
|
| CohereTool |
A definition of a tool (function).
|
| CohereTool.Builder |
|
| CohereToolCall |
A tool call generated by the model.
|
| CohereToolCall.Builder |
|
| CohereToolMessage |
A message that represents a single chat dialog with the TOOL role.
|
| CohereToolMessage.Builder |
|
| CohereToolResult |
The result from invoking tools recommended by the model in the previous chat turn.
|
| CohereToolResult.Builder |
|
| CohereUserMessage |
A message that represents a single chat dialog with the USER role.
|
| CohereUserMessage.Builder |
|
| CompletionTokensDetails |
Breakdown of tokens used in a completion.
|
| CompletionTokensDetails.Builder |
|
| ContentModerationConfiguration |
Configuration for content moderation.
|
| ContentModerationConfiguration.Builder |
|
| ContentModerationResult |
The result of content moderation.
|
| ContentModerationResult.Builder |
|
| DedicatedServingMode |
The model’s serving mode is dedicated serving and has an endpoint on a dedicated AI cluster.
|
| DedicatedServingMode.Builder |
|
| DeveloperMessage |
Developer-provided instructions that the model should follow, regardless of messages sent by the
user.
|
| DeveloperMessage.Builder |
|
| Document |
The input document to rerank.
|
| Document.Builder |
|
| DocumentRank |
An object that contains a relevance score, an index, and the text for a document.
|
| DocumentRank.Builder |
|
| EmbedTextDetails |
Details for the request to embed texts.
|
| EmbedTextDetails.Builder |
|
| EmbedTextDetails.InputType |
Specifies the input type.
|
| EmbedTextDetails.Truncate |
For an input that’s longer than the maximum token length, specifies which part of the input
text will be truncated.
|
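The sketch below shows an embedding request built from EmbedTextDetails and its nested enums. It is a hedged example: field names (inputs, servingMode, compartmentId, truncate) and enum constant names are assumptions based on the builder pattern shown on this page.

```java
// Sketch only: field names and enum constant names assumed.
import com.oracle.bmc.generativeaiinference.model.*;
import java.util.Arrays;

public class EmbedTextSketch {
    static EmbedTextDetails buildEmbedDetails() {
        return EmbedTextDetails.builder()
                .inputs(Arrays.asList("first passage to embed", "second passage to embed"))
                .servingMode(OnDemandServingMode.builder()
                        .modelId("cohere.embed-english-v3.0")     // placeholder model ID
                        .build())
                .compartmentId("ocid1.compartment.oc1..example")  // placeholder OCID
                .truncate(EmbedTextDetails.Truncate.End)          // constant name assumed
                .build();
    }
}
```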
| EmbedTextResult |
The generated embedding result to return.
|
| EmbedTextResult.Builder |
|
| FunctionCall |
The function call generated by the model.
|
| FunctionCall.Builder |
|
| FunctionDefinition |
A function the model may call.
|
| FunctionDefinition.Builder |
|
| GeneratedText |
The text generated during each run.
|
| GeneratedText.Builder |
|
| GenerateTextDetails |
Details for the request to generate text.
|
| GenerateTextDetails.Builder |
|
| GenerateTextResult |
The generated text result to return.
|
| GenerateTextResult.Builder |
|
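The text-generation path pairs GenerateTextDetails with an LlmInferenceRequest subclass such as CohereLlmInferenceRequest. The following minimal sketch is hedged: the field names (prompt, maxTokens, inferenceRequest) are assumptions consistent with the builders listed above, and the OCID and model ID are placeholders.

```java
// Sketch only: field names assumed.
import com.oracle.bmc.generativeaiinference.model.*;

public class GenerateTextSketch {
    static GenerateTextDetails buildGenerateTextDetails() {
        LlmInferenceRequest inferenceRequest = CohereLlmInferenceRequest.builder()
                .prompt("Write a haiku about object storage.")
                .maxTokens(100)
                .temperature(0.7)
                .build();
        return GenerateTextDetails.builder()
                .compartmentId("ocid1.compartment.oc1..example")  // placeholder OCID
                .servingMode(OnDemandServingMode.builder()
                        .modelId("cohere.command")                // placeholder model ID
                        .build())
                .inferenceRequest(inferenceRequest)
                .build();
    }
}
```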
| GenericChatRequest |
Details for the chat request.
|
| GenericChatRequest.Builder |
|
| GenericChatRequest.ReasoningEffort |
Constrains effort on reasoning for reasoning models.
|
| GenericChatRequest.Verbosity |
Constrains the verbosity of the model’s response.
|
| GenericChatResponse |
The response for a chat conversation.
|
| GenericChatResponse.Builder |
|
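For the GENERIC API format, a request carries a list of Message subclasses (SystemMessage, UserMessage, AssistantMessage), each holding ChatContent items such as TextContent. The sketch below is hedged: the content and messages field names are assumptions based on the builders listed on this page.

```java
// Sketch only: field names assumed.
import com.oracle.bmc.generativeaiinference.model.*;
import java.util.Arrays;

public class GenericChatRequestSketch {
    static GenericChatRequest buildRequest() {
        ChatContent question = TextContent.builder()
                .text("Which regions is the service available in?")
                .build();
        Message userMessage = UserMessage.builder()
                .content(Arrays.asList(question))
                .build();
        return GenericChatRequest.builder()
                .messages(Arrays.asList(userMessage))
                .maxTokens(300)
                .temperature(0.2)
                .isStream(false)
                .build();
    }
}
```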
| GroundingChunk |
An object containing the source.
|
| GroundingChunk.Builder |
|
| GroundingMetadata |
Grounding metadata.
|
| GroundingMetadata.Builder |
|
| GroundingSupport |
A chunk that connects the model response text to its source in groundingChunk.
Note: Objects should always be created or deserialized using the GroundingSupport.Builder.
|
| GroundingSupport.Builder |
|
| GroundingSupportSegment |
A segment within groundingSupport.
|
| GroundingSupportSegment.Builder |
|
| GroundingWebChunk |
An object containing the web source.
|
| GroundingWebChunk.Builder |
|
| GuardrailConfigs |
Additional configuration for each guardrail.
|
| GuardrailConfigs.Builder |
|
| GuardrailsInput |
The input data for applying guardrails.
|
| GuardrailsInput.Type |
The type of the input data.
|
| GuardrailsResults |
The results of applying each guardrail.
|
| GuardrailsResults.Builder |
|
| GuardrailsTextInput |
Represents a single instance of text in the guardrails input.
|
| GuardrailsTextInput.Builder |
|
| ImageContent |
Represents a single instance of chat image content.
|
| ImageContent.Builder |
|
| ImageUrl |
Provides a base64-encoded image, or an image URI if supported.
|
| ImageUrl.Builder |
|
| ImageUrl.Detail |
The default value is AUTO and only AUTO is supported.
|
| JsonObjectResponseFormat |
Enables JSON mode, which ensures the message the model generates is valid JSON.
|
| JsonObjectResponseFormat.Builder |
|
| JsonSchemaResponseFormat |
Enables Structured Outputs, which ensures the model's output will match your supplied JSON schema.
|
| JsonSchemaResponseFormat.Builder |
|
| LlamaLlmInferenceRequest |
Details for the text generation request for Llama models.
|
| LlamaLlmInferenceRequest.Builder |
|
| LlamaLlmInferenceResponse |
The generated text result to return.
|
| LlamaLlmInferenceResponse.Builder |
|
| LlmInferenceRequest |
The base class for the inference requests.
|
| LlmInferenceRequest.RuntimeType |
The runtime of the provided model.
|
| LlmInferenceResponse |
The base class for inference responses.
|
| LlmInferenceResponse.RuntimeType |
The runtime of the provided model.
|
| Logprobs |
Includes the logarithmic probabilities for the most likely output tokens and the chosen tokens.
|
| Logprobs.Builder |
|
| Message |
A message that represents a single chat dialog.
|
| Message.Role |
Indicates who is writing the current chat message.
|
| OnDemandServingMode |
The model’s serving mode is on-demand serving on a shared infrastructure.
|
| OnDemandServingMode.Builder |
|
| PersonallyIdentifiableInformationConfiguration |
Configuration for personally identifiable information detection.
|
| PersonallyIdentifiableInformationConfiguration.Builder |
|
| PersonallyIdentifiableInformationResult |
An item of personally identifiable information.
|
| PersonallyIdentifiableInformationResult.Builder |
|
| Prediction |
Configuration for a Predicted Output, which can greatly improve response times when large parts
of the model response are known ahead of time.
|
| Prediction.Type |
The type of the predicted content you want to provide.
|
| PromptInjectionConfiguration |
Configuration for prompt injection protection.
|
| PromptInjectionConfiguration.Builder |
|
| PromptInjectionProtectionResult |
The result of prompt injection protection.
|
| PromptInjectionProtectionResult.Builder |
|
| PromptTokensDetails |
Breakdown of tokens used in the prompt.
|
| PromptTokensDetails.Builder |
|
| RerankTextDetails |
Details required for a rerank request.
|
| RerankTextDetails.Builder |
|
| RerankTextResult |
The rerank response to return to the caller.
|
| RerankTextResult.Builder |
|
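The sketch below shows a rerank request built from RerankTextDetails. It is heavily hedged: the field names (input for the query, documents for the candidate passages, topN) and the element type of documents (String versus the Document class above) are assumptions, and the OCID and model ID are placeholders.

```java
// Sketch only: field names and the documents element type assumed.
import com.oracle.bmc.generativeaiinference.model.*;
import java.util.Arrays;

public class RerankSketch {
    static RerankTextDetails buildRerankDetails() {
        return RerankTextDetails.builder()
                .input("Which document describes pricing?")       // query; field name assumed
                .documents(Arrays.asList(                          // field name and element type assumed
                        "Document about pricing tiers.",
                        "Document about service limits."))
                .topN(1)                                           // field name assumed
                .servingMode(OnDemandServingMode.builder()
                        .modelId("cohere.rerank-v3.5")             // placeholder model ID
                        .build())
                .compartmentId("ocid1.compartment.oc1..example")   // placeholder OCID
                .build();
    }
}
```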
| ResponseFormat |
An object specifying the format that the model must output.
|
| ResponseFormat.Type |
The format type.
|
| ResponseJsonSchema |
The JSON schema definition to be used in JSON_SCHEMA response format.
|
| ResponseJsonSchema.Builder |
|
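ResponseFormat and its subclasses control the shape of the model output: TextResponseFormat for free-form text, JsonObjectResponseFormat for JSON mode, and JsonSchemaResponseFormat for Structured Outputs backed by a ResponseJsonSchema. The minimal sketch below only exercises the parameterless formats; it assumes the standard builder pattern shown on this page.

```java
// Sketch only: builder pattern assumed; JsonSchemaResponseFormat additionally takes
// a ResponseJsonSchema describing the required structure (not shown here).
import com.oracle.bmc.generativeaiinference.model.*;

public class ResponseFormatSketch {
    // JSON mode: output is guaranteed to be valid JSON, but not tied to a schema.
    static ResponseFormat jsonMode() {
        return JsonObjectResponseFormat.builder().build();
    }

    // Free-form text mode, the default behavior.
    static ResponseFormat textMode() {
        return TextResponseFormat.builder().build();
    }
}
```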
| SearchEntryPoint |
Contains the HTML and CSS to render the required Search Suggestions.
|
| SearchEntryPoint.Builder |
|
| SearchQuery |
The generated search query.
|
| SearchQuery.Builder |
|
| ServingMode |
The model’s serving mode, which is either on-demand serving or dedicated serving.
|
| ServingMode.ServingType |
The serving mode type, which is either on-demand serving or dedicated serving.
|
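The two concrete ServingMode subclasses select between shared on-demand capacity, identified by a model ID, and a dedicated AI cluster endpoint, identified by an endpoint OCID. A short sketch, with placeholder identifiers:

```java
// Sketch only: identifiers are placeholders.
import com.oracle.bmc.generativeaiinference.model.*;

public class ServingModeSketch {
    // Shared infrastructure: pick a hosted base model by ID.
    static ServingMode onDemand() {
        return OnDemandServingMode.builder()
                .modelId("meta.llama-3.1-70b-instruct")
                .build();
    }

    // Dedicated AI cluster: point at the endpoint that fronts your hosted model.
    static ServingMode dedicated() {
        return DedicatedServingMode.builder()
                .endpointId("ocid1.generativeaiendpoint.oc1..example")
                .build();
    }
}
```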
| StaticContent |
Static predicted output content, such as the content of a text file that is being regenerated.
|
| StaticContent.Builder |
|
| StreamOptions |
Options for streaming response.
|
| StreamOptions.Builder |
|
| SummarizeTextDetails |
Details for the request to summarize text.
|
| SummarizeTextDetails.Builder |
|
| SummarizeTextDetails.Extractiveness |
Controls how close to the original text the summary is.
|
| SummarizeTextDetails.Format |
Indicates the style in which the summary will be delivered - in a free-form paragraph or in
bullet points.
|
| SummarizeTextDetails.Length |
Indicates the approximate length of the summary.
|
| SummarizeTextResult |
The summarize text result to return to the caller.
|
| SummarizeTextResult.Builder |
|
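The sketch below shows a summarization request using the nested enums above. It is hedged: the field names (input, extractiveness, format, length) and the enum constant casing are assumptions, and the OCID and model ID are placeholders.

```java
// Sketch only: field names and enum constant names assumed.
import com.oracle.bmc.generativeaiinference.model.*;

public class SummarizeSketch {
    static SummarizeTextDetails buildSummarizeDetails(String longText) {
        return SummarizeTextDetails.builder()
                .input(longText)
                .servingMode(OnDemandServingMode.builder()
                        .modelId("cohere.command")                        // placeholder model ID
                        .build())
                .compartmentId("ocid1.compartment.oc1..example")          // placeholder OCID
                .extractiveness(SummarizeTextDetails.Extractiveness.Auto) // constant name assumed
                .format(SummarizeTextDetails.Format.Bullets)              // constant name assumed
                .length(SummarizeTextDetails.Length.Medium)               // constant name assumed
                .build();
    }
}
```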
| SystemMessage |
Represents a single instance of a system message.
|
| SystemMessage.Builder |
|
| TextContent |
Represents a single instance of text in the chat content.
|
| TextContent.Builder |
|
| TextResponseFormat |
Enables TEXT mode.
|
| TextResponseFormat.Builder |
|
| TokenLikelihood |
An object that contains the returned token and its corresponding likelihood.
|
| TokenLikelihood.Builder |
|
| ToolCall |
The tool call generated by the model, such as a function call.
|
| ToolCall.Type |
The type of the tool.
|
| ToolChoice |
The tool choice for a tool.
|
| ToolChoice.Type |
The tool choice.
|
| ToolChoiceAuto |
The model can pick between generating a message or calling one or more tools.
|
| ToolChoiceAuto.Builder |
|
| ToolChoiceFunction |
The tool choice for a function.
|
| ToolChoiceFunction.Builder |
|
| ToolChoiceNone |
The model will not call any tool and instead generates a message.
|
| ToolChoiceNone.Builder |
|
| ToolChoiceRequired |
The model must call one or more tools.
|
| ToolChoiceRequired.Builder |
|
| ToolDefinition |
A tool the model may call.
|
| ToolDefinition.Type |
The type of the tool.
|
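Tool calling in the GENERIC format combines a ToolDefinition subclass such as FunctionDefinition with a ToolChoice. The sketch below is heavily hedged: the field names (name, description, parameters, tools, toolChoice), the Java type of the parameters payload, and the example function name are all assumptions introduced for illustration.

```java
// Sketch only: field names, parameters type, and the get_weather function are assumptions.
import com.oracle.bmc.generativeaiinference.model.*;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ToolSketch {
    static GenericChatRequest requestWithTool(Message userMessage) {
        // Arbitrary JSON-schema-style parameters object; the expected Java type is assumed.
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("type", "object");

        ToolDefinition weatherTool = FunctionDefinition.builder()
                .name("get_weather")                               // hypothetical function name
                .description("Look up the current weather for a city.")
                .parameters(parameters)                            // field name and type assumed
                .build();

        return GenericChatRequest.builder()
                .messages(Arrays.asList(userMessage))
                .tools(Arrays.asList(weatherTool))                 // field name assumed
                .toolChoice(ToolChoiceAuto.builder().build())      // field name assumed
                .build();
    }
}
```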
| ToolMessage |
Represents a single instance of a tool message.
|
| ToolMessage.Builder |
|
| UrlCitation |
Contains metadata for a cited URL included in the assistant's response.
|
| UrlCitation.Builder |
|
| Usage |
Usage statistics for the completion request.
|
| Usage.Builder |
|
| UserMessage |
Represents a single instance of a user message.
|
| UserMessage.Builder |
|
| VideoContent |
Represents a single instance of chat video content.
|
| VideoContent.Builder |
|
| VideoUrl |
The base64-encoded video data, or a video URI if supported.
|
| VideoUrl.Builder |
|
| VideoUrl.Detail |
The default value is AUTO and only AUTO is supported.
|
| WebSearchOptions |
Options for performing a web search to augment the response.
|
| WebSearchOptions.Builder |
|
| WebSearchOptions.SearchContextSize |
Specifies the size of the web search context.
|