Chat in OCI Generative AI

Use the large language chat models provided in OCI Generative AI to ask questions and get conversational responses through an AI chatbot.

    1. In the navigation bar of the Console, select a region with Generative AI, for example, US Midwest (Chicago) or UK South (London). See which models are offered in your region.
    2. Open the navigation menu and click Analytics & AI. Under AI Services, click Generative AI.
    3. Select a compartment that you have permission to work in. If you don't see the playground, ask an administrator to give you access to Generative AI resources and then return to these steps.
    4. Click Playground.
    5. Click Chat.
    6. Select a model for the chat experience by performing one of the following actions:
      • In the Model list, select a model such as meta.llama-3.1-70b-instruct, cohere.command-r-plus, cohere.command-r-16k, or a custom model. The custom models are displayed as model name (endpoint name).
      • Click View model details, select a model and then click Choose model.
      Note

      The meta.llama-3.1-405b-instruct model is not available for on-demand access in all regions. To access this model, try one of the following options:

      • Set up dedicated access: Switch to a region that supports dedicated clusters for the meta.llama-3.1-405b-instruct chat model. Then, create a hosting cluster and an endpoint for this model.
      • Switch to an on-demand region: Switch to the US Midwest (Chicago) region that's supported for on-demand inferencing for the meta.llama-3.1-405b-instruct chat model.

      Learn about costs and model retirements for on-demand and dedicated serving modes.

    7. Start a conversation by typing a prompt or selecting an example from the Example list to use as a base prompt or to learn from.
    8. (Optional) Set new values for the parameters. For parameter details, see Chat Model Parameters.
    9. Click Submit.
    10. Enter a new prompt, or to continue the chat conversation, enter a follow-up prompt, and then click Submit.
    11. (Optional) To change the responses, click Clear chat, update the prompts and parameters, and click Submit. Repeat this step until you're happy with the output.
    12. (Optional) To copy the code that generated the output, click View code, select a programming language, click Copy code, and then paste the code into a file and save it. Ensure that the file preserves the format of the pasted code.
      Tip

      If you're using the code in an application, ensure that you authenticate your code.
    13. (Optional) To start a new conversation, click Clear chat.
      Note

      • When you click Clear chat, the chat conversation is erased, but the model parameters remain unchanged, so you can continue using the last settings that you applied.

      • If you switch to a different feature, such as Generation, and then return to the Chat playground, both the chat conversation and the model parameters reset to their default values.

      Learn about Cohere chat parameters.
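The parameters you set in the playground correspond to fields on the underlying chat request. The following is a minimal sketch of the common ones with typical values; the field names and ranges here are assumptions based on the Cohere-style chat request, so confirm them in Chat Model Parameters before relying on them.

```python
# Hedged sketch: common chat parameters and typical values.
# Field names and ranges are assumptions; confirm them against the
# Chat Model Parameters documentation for your model.
chat_parameters = {
    "maxTokens": 600,         # upper bound on tokens generated per response
    "temperature": 1.0,       # higher values produce more varied output
    "topP": 0.75,             # nucleus sampling: sample from the top p probability mass
    "topK": 0,                # 0 disables top-k sampling
    "frequencyPenalty": 0.0,  # penalize tokens in proportion to how often they appear
    "presencePenalty": 0.0,   # penalize tokens that have appeared at all
}

for name, value in chat_parameters.items():
    print(f"{name} = {value}")
```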

  • To chat, use the chat-result operation in the Generative AI Inference CLI.

    Enter the following command for a list of options to use with the chat-result operation.

    oci generative-ai-inference chat-result -h

    For a complete list of parameters and values for the OCI Generative AI CLI commands, see Generative AI Inference CLI and Generative AI Management CLI.
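Complex CLI parameters such as the chat request are passed as JSON, commonly from a file referenced with `file://`. Below is a hedged sketch of building such a payload file; the field names follow the Cohere-style chat request and are assumptions, so run the `-h` command above to confirm the exact parameter names the operation expects.

```python
import json

# Hedged sketch: a JSON payload for the chat-result operation's chat request.
# Field names are assumptions based on the chat request schema; verify them
# with `oci generative-ai-inference chat-result -h` before use.
chat_request = {
    "apiFormat": "COHERE",  # Cohere chat models use the COHERE format
    "message": "Tell me about Oracle Cloud Infrastructure.",
    "maxTokens": 600,
    "temperature": 1.0,
}

with open("chat-request.json", "w") as f:
    json.dump(chat_request, f, indent=2)

print(json.dumps(chat_request, indent=2))
```

You would then reference the file from the CLI with a parameter such as `--chat-request file://chat-request.json` (the parameter name is an assumption; confirm it with `-h`).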

  • Run the Chat operation to chat using the large language models.

    For information about using the API and signing requests, see REST API documentation and Security Credentials. For information about SDKs, see SDKs and the CLI.
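As a rough sketch of what a Chat operation request body can look like for a Meta Llama model, the example below builds the JSON payload. The GENERIC message structure and field names are assumptions drawn from the chat request schema, and the compartment OCID is a placeholder; confirm the shape against the REST API documentation, and remember that every request must be signed as described in Security Credentials.

```python
import json

# Hedged sketch of a Chat operation request body. Field names and the
# GENERIC message structure are assumptions; verify them against the
# Generative AI Inference REST API documentation. The OCID is a placeholder.
body = {
    "compartmentId": "ocid1.compartment.oc1..exampleuniqueID",
    "servingMode": {
        "servingType": "ON_DEMAND",
        "modelId": "meta.llama-3.1-70b-instruct",
    },
    "chatRequest": {
        "apiFormat": "GENERIC",  # Meta Llama chat models use the GENERIC format
        "messages": [
            {"role": "USER", "content": [{"type": "TEXT", "text": "Hello!"}]}
        ],
        "maxTokens": 600,
        "temperature": 1.0,
    },
}

print(json.dumps(body, indent=2))
```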