You can enable absolute or relative timestamps for chat messages. Absolute
timestamps display the exact time for each message. Relative timestamps display only on
the latest message and express the time in terms of seconds, days, hours, months, or
years ago relative to the previous message. The precision afforded by absolute timestamps
makes them ideal for archival tasks, but within the limited context of a chat session,
this precision detracts from the user experience because users must compare timestamps
to work out how much time has passed between messages. Relative timestamps allow users to
track the conversation easily through terms like Just Now and A few moments
ago that can be immediately understood. Relative timestamps improve the user
experience in another way while also simplifying your development tasks: because
relative timestamps mark the messages in terms of seconds, days, hours, months, or years
ago, you don't need to convert them for timezones.
When you configure relative timestamps (timestampType: 'relative'), an
absolute timestamp displays before the first message of the day as a header. This header
displays when the conversation has not been cleared and older messages are still
available in the history.
This timestamp is updated at the following regular intervals until a new message is
received:
For the first 10 seconds
Between 10 and 60 seconds
Every minute between 1 and 60 minutes
Every hour between 1 and 24 hours
Every day between 1 and 30 days
Every month between 1 and 12 months
Every year after the first year
When a new message is loaded into the chat, the relative timestamp on the previous
message is removed and a new timestamp appears on the new message, displaying the time
relative to the previous message. At that point, the relative timestamp updates until
the next message arrives.
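For example, a minimal sketch of enabling relative timestamps, assuming the timestampType flag mentioned earlier is exposed on the configuration builder (the builder method name is an assumption, not confirmed by this document):
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .timestampType("relative") // assumed builder method taking the 'relative' value
        .build();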
Action Buttons Layout
Feature flag: actionsLayout
actionsLayout sets the layout direction for the local, global, card, and
form actions. When you set this to LayoutOrientation.HORIZONTAL, these
buttons are laid out horizontally and wrap if the content overflows.
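For example, a sketch that lays out these actions horizontally; the ActionsLayout constructor and its parameter order (local, global, card, form) are assumptions here, so check the SDK's Javadoc:
ActionsLayout actionsLayout = new ActionsLayout(
        LayoutOrientation.HORIZONTAL,  // local actions (assumed parameter order)
        LayoutOrientation.HORIZONTAL,  // global actions
        LayoutOrientation.HORIZONTAL,  // card actions
        LayoutOrientation.HORIZONTAL); // form actions
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .actionsLayout(actionsLayout)
        .build();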
Attachment Filtering
Feature flag: shareMenuItems
Use this feature to restrict, or filter, the item types that are available in the share
menu popup, set the file size limit for uploads (such as 1024 in the following snippet),
and customize the menu's icons and labels.
Note
Before you can configure shareMenuItems, you must set enableAttachment to true.
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
ArrayList<Object> customItems = new ArrayList<>(Arrays.asList(shareMenuCustomItem1, shareMenuCustomItem2, shareMenuCustomItem3, ShareMenuItem.CAMERA));
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(sharedPreferences.getString(getString(R.string.pref_name_chat_server_host), Settings.CHAT_SERVER_URL), false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.shareMenuItems(customItems)
.enableAttachment(true)
.build();
If a ShareMenuCustomItem object has no value or null for the label, as
does shareMenuCustomItem3 = ShareMenuCustomItem('csv') in the preceding
snippet, then the type string suffixed to share_ becomes the label. For
shareMenuCustomItem3, the label is share_csv.
Note
You can allow users to upload all file types by setting the type of a
ShareMenuCustomItem object as *.
public static void shareMenuItems(ArrayList<Object> shareMenuItems)
You can dynamically update the share menu items popup by calling the
Bots.shareMenuItems(customItems) API, where customItems is
an ArrayList of Objects. Each object can be either one of the
ShareMenuItem enum values or a ShareMenuCustomItem object.
ArrayList<Object> customItems = new ArrayList<>();
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
customItems.add(shareMenuCustomItem1);
customItems.add(ShareMenuItem.CAMERA);
customItems.add(ShareMenuItem.FILE);
customItems.add(shareMenuCustomItem2);
customItems.add(shareMenuCustomItem3);
Bots.shareMenuItems(customItems);
public static void shareMenuItems()
You can get the share menu items list by calling the Bots.shareMenuItems() API.
Bots.shareMenuItems()
Auto-Submitting a Field
When a field has the autoSubmit property set to
true, the client sends a
FormSubmissionMessagePayload with the
submittedField map containing the valid field values that
have been entered so far. Any fields that are not set yet (regardless of whether they
are required), or fields that fail client-side validation, are not included in the
submittedField map. If the auto-submitted field itself contains a
value that's not valid, then the submission message is not sent and the client error
message displays for that particular field. When an auto-submit succeeds, the
partialSubmitField in the form submission message is set to
the id of the autoSubmit field.
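If you need to observe these auto-submissions client-side, one place to do so is the beforeSend delegate described later in this topic. The sketch below assumes a hypothetical getPartialSubmitField() accessor on FormSubmissionMessagePayload; check the SDK's Javadoc for the actual names:
@Override
public Message beforeSend(Message message) {
    MessagePayload payload = message.getPayload();
    if (payload instanceof FormSubmissionMessagePayload) {
        FormSubmissionMessagePayload submission = (FormSubmissionMessagePayload) payload;
        // partialSubmitField is non-null only when an autoSubmit field triggered the
        // submission (getPartialSubmitField() is a hypothetical accessor name)
        if (submission.getPartialSubmitField() != null) {
            // For example, log the id of the auto-submitted field here
        }
    }
    return message;
}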
Replacing a Previous Input Form
When the end user submits the form (for example, because a field has
autoSubmit set to true), the skill can send a new
EditFormMessagePayload. That message should replace the previous
input form message. By setting the replaceMessage channel extension
property to true, you enable the SDK to replace the previous input form
message with the current input form message.
Connect and Disconnect Methods
The skill can be connected and disconnected using the public void
connect() and public void disconnect() methods. The
WebSocket is closed after calling the disconnect method:
Bots.disconnect();
Calling the following method re-establishes the WebSocket connection if the skill is in
a disconnected state:
Bots.connect();
When public void connect(BotsConfiguration botsConfiguration) is called
with a new botsConfiguration object, the existing WebSocket connection
is closed and a new connection is established using the new
botsConfiguration object.
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.build();
Bots.connect(botsConfiguration);
Default Client Responses
Feature flag: enableDefaultClientResponse
Use enableDefaultClientResponse: true to provide default
client-side responses, accompanied by a typing indicator, when the skill's response has
been delayed or when there's no skill response at all. If the user sends the first
message/query, but the skill does not respond within the number of seconds set by the
odaas_default_greeting_timeout flag, the skill can display a
greeting message that's configured using the
odaas_default_greeting_message translation string. Next, the client
checks again for the skill's response. The client displays the skill's response if it
has been received; if it hasn't, then the client displays a wait message (configured
with the odaas_default_wait_message translation string) at intervals
set by the odaas_default_wait_message_interval flag. When the wait for
the skill response exceeds the threshold set by the
typingIndicatorTimeout flag, the client displays a sorry response
to the user and stops the typing indicator. You can configure the sorry response using
the odaas_default_sorry_message translation string.
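A sketch of enabling this behavior, assuming the flag is exposed as a builder method (the odaas_* values themselves are overridden as translation string resources in your app):
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableDefaultClientResponse(true) // assumed builder method for the feature flag
        .build();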
Delegation
Feature configuration: messageModifierDelegate
The delegation feature lets you set a delegate to receive callbacks before certain
events in the conversation. To set a delegate, a class must implement the
MessageModifierDelegate interface and pass its instance to the
messageModifierDelegate property.
public class MessageDelegate implements MessageModifierDelegate {
    @Override
    public Message beforeSend(Message message) {
        // Handle the before-send delegate here
        return message;
    }

    @Override
    public Message beforeDisplay(Message message) {
        // Example: force card messages to use a vertical layout
        if (message != null && message.getPayload() != null && message.getPayload().getType() == MessagePayload.MessageType.CARD) {
            ((CardMessagePayload) message.getPayload()).setLayout(CardLayout.VERTICAL);
        }
        return message;
    }

    @Override
    public Message beforeNotification(Message message) {
        // Handle the before-notification delegate here
        return message;
    }

    @Override
    public void beforeEndConversation(CompletionHandler completionHandler) {
        // Handle the before-end-conversation delegate here
        // Trigger the completionHandler.onSuccess() callback after successful execution of the task.
        // Trigger the completionHandler.onFailure() callback when the task is unsuccessful.
    }
}
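To register the delegate, pass an instance of this class to the configuration; the builder method name below is an assumption based on the messageModifierDelegate property named above:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .messageModifierDelegate(new MessageDelegate()) // assumed builder method for the property
        .build();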
public Message beforeDisplay(Message message)
The public Message beforeDisplay(Message message) delegate allows a
skill's message to be modified before it is displayed in the conversation.
The modified message that's returned by the delegate displays in the
conversation. If the method returns null, then the message
is not displayed.
public Message beforeSend(Message message)
The public Message beforeSend(Message message) delegate allows a
user message to be modified before it is sent to the chat server. The message returned
by the delegate is sent to the skill. If it returns null, then the message is not
sent.
public Message beforeNotification(Message message)
The public Message beforeNotification(Message message) delegate
allows a skill's message to be modified before a notification is triggered. If it
returns null, then the notification is not triggered.
Display the Conversation History
You can either enable or disable the display of a user's local conversation history
after the SDK has been re-initialized by setting displayPreviousMessages to
true or false in the bots configuration. When set
to false, previous messages are not displayed for the user after
re-initialization of the SDK.
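A sketch, assuming the property is exposed on the configuration builder:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .displayPreviousMessages(false) // assumed builder method; hides local history after re-initialization
        .build();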
End the Chat Session
Feature flag: enableEndConversation: true
enableEndConversation: true adds a close button to the header view
that enables users to explicitly end the current chat session. A confirmation prompt
opens when users click this close button, and when they confirm the close action,
the SDK sends an event message to the skill that marks the end of the chat session. The
SDK then disconnects the skill from the instance, collapses the chat widget, and erases
the current user's conversation history. The SDK triggers the
beforeEndConversation(CompletionHandler completionHandler) delegate, which
can be used to perform a task before sending the close session request to the server. It
also raises an OnChatEnd() event that you can register for.
Opening the chat widget afterward starts a new chat session.
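A sketch of enabling the close button, assuming the flag is exposed as a builder method:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableEndConversation(true) // assumed builder method for the feature flag
        .build();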
public static void endChat()
The conversation can also be ended dynamically by calling the
Bots.endChat() API.
Bots.endChat()
CompletionHandler
CompletionHandler is an event listener implemented on the
SDK that listens for the completion of the task being performed on the
beforeEndConversation(CompletionHandler completionHandler) delegate
in the host application. Refer to the Javadoc included with the SDK, available from the
ODA and OMC download page.
Headless SDK
The SDK can be used without its UI. To use it in this mode, import only the
com.oracle.bots.client.sdk.android.core-24.12.aar package into the project,
as described in Add the Oracle Android Client SDK to the Project.
The SDK maintains the connection to the server and provides APIs to send messages,
receive messages, and get updates for the network status and for other services. You can
use the APIs to interact with the SDK and update the UI.
You can send a message using any of the send*() APIs
available in the Bots class. For example, public static void
sendMessage(String text) sends a text message to the skill or digital
assistant.
public static void sendMessage(String text)
Sends a text message to the skill. Its text parameter is the text
message.
Bots.sendMessage("I want to order a Pizza");
EventListener
To listen for connection status changes, for messages sent to and received from the
skill, and for attachment upload status events, a class should implement the
EventListener interface, which then implements the functionality for:
void onStatusChange(ConnectionStatus connectionStatus) - This method is
called when the WebSocket connection status changes. Its
connectionStatus parameter is the current status of the connection.
Refer to the Javadocs included in the SDK (available from the ODA and OMC download page)
for more details about the ConnectionStatus enum.
void onMessageReceived(Message message) - This method is called when a
new message is received from the skill. Its message parameter is the
message received from the skill. Refer to the Javadocs included in the SDK (available
from the ODA and OMC download page) for more details about the Message class.
void onMessageSent(Message message) - This method is called when a
message is sent to the skill. Its message parameter is the message sent
to the skill. Refer to the Javadocs included in the SDK (available from the ODA and OMC
download page) for more details about the Message class.
void onAttachmentComplete() - This method is called when an attachment
upload has completed.
public class BotsEventListener implements EventListener {
@Override
public void onStatusChange(ConnectionStatus connectionStatus) {
// Handle the connection status change
}
@Override
public void onMessageReceived(Message message) {
// Handle the messages received from skill/DA
}
@Override
public void onMessageSent(Message message) {
// Handle the message sent to skill or Digital Assistant
}
@Override
public void onAttachmentComplete() {
// Handle the post attachment upload actions
// Close the attachment upload progress popup if any etc.
}
}
The instance of type EventListener should then be passed to
setEventListener(EventListener eventListener).
public static void setEventListener(EventListener eventListener)
Sets the listener to receive the responses returned from the skill, updates on
connection status changes, and an update when an attachment upload is complete. Its
eventListener parameter is an instance of type
EventListener.
Bots.setEventListener(new BotsEventListener());
In-Widget Webview
Feature flag: linkHandler
You can configure the link behavior in chat messages to allow users to access web
pages from within the chat widget. Instead of having to switch from the conversation to
view a page in a tab or separate browser window, a user can remain in the chat because
the chat widget opens the link within a Webview.
Configure the In-Widget Webview
Feature flag: webViewConfig
You can configure the webview linking behavior by setting the
linkHandler function to
WebviewLinkHandlerType.WEBVIEW. You can set the size and display of
the webview itself using a webViewConfig class
object:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.linkHandler(WebviewLinkHandlerType.WEBVIEW)
.webViewConfig(new WebViewConfig()
.webViewSize(WebviewSizeWindow.FULL)
.webViewTitleColor(<COLOR_VALUE>)
.webviewHeaderColor(<COLOR_VALUE>)
.clearButtonLabel(<BUTTON_TEXT>)
.clearButtonLabelColor(<COLOR_VALUE>)
.clearButtonIcon(<IMAGE_ID>))
.build();
As illustrated in this code snippet, you can set the following attributes for the
webview:
webViewSize - Sets the screen size of the in-widget webview window with the
WebviewSizeWindow enum, which has two values: PARTIAL
(WebviewSizeWindow.PARTIAL) and FULL
(WebviewSizeWindow.FULL).
clearButtonLabel - Sets the text used for the clear/close button in the top-right
corner of the webview. The default text is DONE.
clearButtonIcon - Sets an icon for the clear button, which appears left-aligned
inside the button.
clearButtonLabelColor - Sets the color of the text of the clear button label.
clearButtonColor - Sets the background color for the clear button.
webviewHeaderColor - Sets the background color for the webview header.
webviewTitleColor - Sets the color of the title in the header. The title is the URL
of the web link that has been opened.
Multi-Lingual Chat
Feature flag: multiLangChat
The Android SDK's native language support enables
the chat widget to both detect a user's language and allow the
user to select the conversation language from a dropdown menu in
the header. Users can switch between languages, but only
between conversations, not during a conversation, because the
conversation is reset whenever a user selects a new language.
Enable the Language Menu
You can enable a menu that allows users to select a preferred language from a
dropdown menu by defining the multiLangChat property with an object
containing the supportedLanguage ArrayList, which comprises
language tags (lang) and optional display labels
(label). Outside of this array, you can optionally set the default
language with the primary property, as illustrated by
primary("en") in the following snippet.
ArrayList<SupportedLanguage> supportedLanguages = new ArrayList<>();
supportedLanguages.add(new SupportedLanguage("en"));
supportedLanguages.add(new SupportedLanguage("fr", "French"));
supportedLanguages.add(new SupportedLanguage("de", "German"));
MultiLangChat multiLangChat = new MultiLangChat().supportedLanguage(supportedLanguages).primary("en");
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext()) // Configuration to initialize the SDK
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.multiLangChat(multiLangChat)
.build();
The chat widget displays the passed-in supported languages in a dropdown menu that's
located in the header. In addition to the available languages, the menu also includes a
Detect Language option. When a user selects a language from
this menu, the current conversation is reset, and a new conversation is started with the
selected language. The language selected by the user persists across sessions in the
same browser, so the user's previous language is automatically selected when the user
revisits the skill through the page containing the chat widget.
Here are some things to keep in mind when configuring multi-language
support:
You need to define a minimum of two languages to enable the
dropdown menu to display.
If you omit the primary key, the widget
automatically detects the language in the user profile and selects
the Detect Language option in the menu.
Disable Language Menu
Starting with Version 21.12, you can also configure and update the chat language
without also having to configure the language selection dropdown menu by passing
primary in the initial configuration without the
supportedLanguage ArrayList. The value passed in the
primary variable is set as the chat language for the
conversation.
Language Detection
In addition to the passed-in languages, the chat widget displays a
Detect Language option in the dropdown.
Selecting this option tells the skill to automatically detect the
conversation language from the user's message and, when possible, to respond
in the same language.
Note
If you omit the primary property, the widget
automatically detects the language in the user profile and activates
the Detect Language option in the menu.
You can dynamically update the selected language by calling the
setPrimaryChatLanguage(lang) API. If the passed
lang matches one of the supported languages, then that language is
selected. When no match can be found, Detect Language is
activated. You can also activate the Detect Language option by
calling the Bots.setPrimaryChatLanguage("und") API, where
"und" indicates undetermined.
You can update the chat language dynamically using the
setPrimaryChatLanguage(lang) API even when the
dropdown menu has not been configured.
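For example:
Bots.setPrimaryChatLanguage("fr");  // selects French if it's among the supported languages
Bots.setPrimaryChatLanguage("und"); // activates the Detect Language option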
Multi-Lingual Chat Quick Reference
Display the language selection dropdown to end users - Define the
multiLangChat property with an object containing the
supportedLanguage ArrayList.
Set the chat language without displaying the language selection dropdown menu to end
users - Define primary only.
Set a default language - Pass primary with the
supportedLanguage ArrayList. The primary value must be
one of the supported languages included in the array.
Enable language detection - Pass primary as und.
Dynamically update the chat language - Call the
setPrimaryChatLanguage(lang) API.
Share Menu Options
By default, the share menu displays options for the following file types:
visual media files (images and videos)
audio files
general files like documents, PDFs, and spreadsheets
location
By passing an ArrayList of Objects to
shareMenuItems(ArrayList<Object>), you can restrict, or filter, the types
of items that are available in the menu, customize the menu's icons and labels, and
limit the upload file size (such as 1024 in the following snippet). These objects can
be either ShareMenuCustomItem objects or
ShareMenuItem enum values that are mapped to the share menu items:
ShareMenuItem.CAMERA for the camera menu item (if supported by the
device), ShareMenuItem.VISUAL for sharing an image or video item,
ShareMenuItem.AUDIO for sharing an audio item, and
ShareMenuItem.FILE for sharing a file item. Passing either an empty
value or a null value displays all of the menu items that can be passed as
ShareMenuItem enum values.
If a ShareMenuCustomItem object has no value or null for the label, as
does shareMenuCustomItem3 = ShareMenuCustomItem('csv') in the following
snippet, then the type string suffixed to share_ becomes the
label. For shareMenuCustomItem3, the label is
share_csv. You can allow users to upload all file types by setting
the type of a ShareMenuCustomItem object as *.
Note
This configuration only applies when enableAttachment is set to
true.
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
ArrayList<Object> customItems = new ArrayList<>(Arrays.asList(shareMenuCustomItem1, shareMenuCustomItem2, shareMenuCustomItem3, ShareMenuItem.CAMERA));
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(sharedPreferences.getString(getString(R.string.pref_name_chat_server_host), Settings.CHAT_SERVER_URL), false, getApplicationContext())
.channelId(<CHANNEL_ID>)
.userId(<USER_ID>)
.shareMenuItems(customItems)
.enableAttachment(true)
.build();
public static void shareMenuItems()
You can get the share menu items list by calling the
Bots.shareMenuItems() API.
Bots.shareMenuItems()
public static void shareMenuItems(ArrayList<Object> shareMenuItems)
You can dynamically update the share menu items popup by calling the
Bots.shareMenuItems(customItems) API, where
customItems is an ArrayList of Objects. Each object can be either
one of the ShareMenuItem enum values or a
ShareMenuCustomItem object.
ArrayList<Object> customItems = new ArrayList<>();
ShareMenuCustomItem shareMenuCustomItem1 = new ShareMenuCustomItem("pdf bin", "Label1", 1024, R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem2 = new ShareMenuCustomItem("doc", "Label2", R.drawable.odaas_menuitem_share_file);
ShareMenuCustomItem shareMenuCustomItem3 = new ShareMenuCustomItem("csv");
customItems.add(shareMenuCustomItem1);
customItems.add(ShareMenuItem.CAMERA);
customItems.add(ShareMenuItem.FILE);
customItems.add(shareMenuCustomItem2);
customItems.add(shareMenuCustomItem3);
Bots.shareMenuItems(customItems);
Speech Recognition
Feature flag: enableSpeechRecognition
Setting the enableSpeechRecognition feature flag to
true enables the microphone button to display along with the send
button whenever the user input field is empty.
Setting this property to true also supports the
functionality enabled by the enableSpeechRecognitionAutoSend property,
which, when also set to true, enables the user's speech response to be
sent to the chat server automatically while displaying the response as a sent message in
the chat window. You can allow users to first edit (or delete) their dictated messages
before they send them manually by setting
enableSpeechRecognitionAutoSend to false.
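A configuration sketch combining both flags; the assumption here is that the builder method names match the flag names:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableSpeechRecognition(true)          // display the microphone button
        .enableSpeechRecognitionAutoSend(false) // let users edit dictated text before sending
        .build();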
Speech recognition is utilized through the following methods:
public static void startRecording(IBotsSpeechListener listener)
Starts recording the user's voice message. The listener parameter
is an instance of IBotsSpeechListener to receive the response returned
from the server.
public static void stopRecording()
Stops recording the user's voice message.
public static boolean isRecording()
Checks whether the voice recording has started or not. Returns true
if the recording has started. Otherwise, it returns false.
IBotsSpeechListener
A class should implement the IBotsSpeechListener interface, which
then implements the functionality for the following methods:
void onError(String error)
This method is called when errors occur while establishing the connection to the
server, or when there is either no input or too much input. Its
error parameter is the error message.
void onSuccess(String utterance)
This method is called when a final result is received from the server. Its
utterance parameter is the final utterance received from the
server. (This method was deprecated in Release 20.8.1.)
void onSuccess(BotsSpeechResult botsSpeechResult)
This method is called when a final result is received from the server. Its
parameter, botsSpeechResult, is the final response received from the
server.
void onPartialResult(String utterance)
This method is called when a partial result is received from the server. Its
utterance parameter is the partial utterance received from the server.
void onClose(int code, String message)
This method is called when the connection to the server closes.
Parameters:
code - The status code
message - The reason for closing the connection
void onOpen()
This method is called when the connection to the server opens.
void onActiveSpeechUpdate(byte[] speechData)
This method is called when there is an update in the user's voice message, which can
then be used for updating the speech visualizer. Its parameter,
speechData, is the byte array of the user's recorded voice.
public class BotsSpeechListener implements IBotsSpeechListener {
@Override
public void onError(String error) {
// Handle errors
}
@Override
public void onSuccess(String utterance) {
// This method was deprecated in release 20.8.1.
// Handle final result
}
@Override
public void onSuccess(BotsSpeechResult botsSpeechResult) {
// Handle final result
}
@Override
public void onPartialResult(String utterance) {
// Handle partial result
}
@Override
public void onClose(int code, String message) {
// Handle the close event of connection to server
}
@Override
public void onOpen() {
// Handle the open event of connection to server
}
@Override
public void onActiveSpeechUpdate(byte[] speechData) {
// Handle the speech update event
}
}
Bots.startRecording(new BotsSpeechListener()); // Start voice recording
if (Bots.isRecording()) {
Bots.stopRecording(); // Stop voice recording
}
The SDK has been integrated with speech synthesis to read the skill's message aloud
when a new message is received from the skill:
Users can mute or unmute the skill's audio response using a button that's
located in the header of the chat view. You enable this feature by setting the
enableSpeechSynthesis feature flag to true.
You can set the preferred language that reads the skill's messages aloud with the
speechSynthesisVoicePreferences property. This parameter,
which sets the language and voice, is a list of
SpeechSynthesisSetting instances (described in the SDK's
Javadoc that you download from the ODA and OMC download
page). This property enables a fallback when the device doesn't support
the preferred language or voice. If the device does not support the preferred
voice, then the default voice for the preferred language is used instead. When
neither the preferred voice nor the preferred language is supported, the default
voice and language are used.
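A sketch of enabling speech synthesis with a voice preference list; the SpeechSynthesisSetting constructor arguments and the builder method names below are assumptions, so check the SDK's Javadoc:
ArrayList<SpeechSynthesisSetting> voicePreferences = new ArrayList<>();
// Hypothetical locale/voice arguments; the real constructor is described in the Javadoc
voicePreferences.add(new SpeechSynthesisSetting("en-US", "en-us-x-sfg"));
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableSpeechSynthesis(true)
        .speechSynthesisVoicePreferences(voicePreferences) // assumed builder method for the property
        .build();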
public static void initSpeechSynthesisService()
Initializes the speech synthesis service. This method should be called in the
onCreate() method of an Activity. The speech synthesis service is
initialized when the SDK library initializes, but only if the
enableSpeechSynthesis feature flag is set to true.
public class ConversationActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Bots.initSpeechSynthesisService();
}
}
public static void startBotAudioResponse(String text)
Starts reading the skill's response aloud. Its text parameter is
the text of the skill's message that's read aloud.
Bots.startBotAudioResponse("What kind of crust do you want?");
Note
This method was deprecated in Release 21.08.
public static void stopBotAudioResponse()
Stops reading the skill's response aloud.
Bots.stopBotAudioResponse()
public static boolean isSpeaking()
Checks whether the skill's response is currently being read aloud.
Returns true if the skill's response is currently being read aloud.
Otherwise, it returns false.
if (Bots.isSpeaking()) {
Bots.stopBotAudioResponse();
}
public static void shutdownBotAudioResponse()
Releases the resources used by the SDK. This method should be called in the
onDestroy() method of ConversationActivity.
public class ConversationActivity extends AppCompatActivity {
@Override
protected void onDestroy() {
super.onDestroy();
Bots.shutdownBotAudioResponse();
}
}
Speech Service Injection
Feature flag: ttsService
The speechSynthesisService feature flag allows you to inject any
text-to-speech (TTS) service, whether your own or one provided by a third-party vendor,
into the SDK. To inject a TTS service, you must first set the
enableSpeechSynthesis feature flag to true and
then pass an instance of the SpeechSynthesisService interface to the
speechSynthesisService flag.
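For example, a sketch of wiring in a custom TTS implementation; TextToSpeechServiceInjection is the sample class shown later in this section, and the speechSynthesisService builder method is assumed from the flag name:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableSpeechSynthesis(true)
        .speechSynthesisService(new TextToSpeechServiceInjection())
        .build();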
The SpeechSynthesisService Interface
You create an instance of a class that's an implementation of the
SpeechSynthesisService interface. It implements these methods:
initTextToSpeechService(@NonNull Application application,
@NonNull BotsConfiguration botsConfiguration): Initializes a new
TTS service. Its parameters:
application - The application. This cannot be null.
botsConfiguration - The BotsConfiguration object used to control
the features of the library. This cannot be null.
speak(String phrase): Adds a phrase that's to be
spoken to the utterance queue. Its phrase parameter is the
text to be spoken.
isSpeaking(): Checks whether or not the audio
response is being spoken. It returns false if no audio
response is currently being spoken.
Note
This method was deprecated in
Release 21.08.
stopTextToSpeech(): Stops any ongoing speech
synthesis.
Note
This method was deprecated in Release 21.08.
shutdownTextToSpeech(): Releases the resources
used by the TextToSpeech engine.
getSpeechSynthesisVoicePreferences(): Returns the
voice preferences array which is used to choose the best match for the available
voice that's used for speech synthesis.
setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting>
speechSynthesisVoicePreferences): Sets the voice preferences array
which is used to choose the best available voice match for speech synthesis. The
speechSynthesisVoicePreferences parameter is the voice
preference array for speech synthesis.
onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting>
speechSynthesisVoicePreferences): Sets the speech synthesis voice
to the best available voice match.
Note
This method was deprecated in Release
21.08.
We recommend that you call this method inside the
setSpeechSynthesisVoicePreferences method after setting the
voice preferences ArrayList. The
speechSynthesisVoicePreferences parameter is the voice
preference array for speech synthesis.
onSpeechRecognitionLocaleChange(Locale
speechLocale): This method gets invoked when the speech recognition
language has changed. By overriding this method, you can set the speech
synthesis language to the same language as the speech recognition language. The
speechLocale parameter is the locale set for speech
recognition.
private class TextToSpeechServiceInjection implements SpeechSynthesisService {
@Override
public void initTextToSpeechService(@NonNull Application application, @NonNull BotsConfiguration botsConfiguration) {
// Initialisation of Text to Speech Service.
}
@Override
public void speak(String phrase) {
// Adds a phrase to the utterance queue to be spoken
}
@Override
public boolean isSpeaking() {
// Checks whether the bot audio response is being spoken or not.
return false;
}
@Override
public void stopTextToSpeech() {
// Stops any ongoing speech synthesis
}
@Override
public void shutdownTextToSpeech() {
// Releases the resources used by the TextToSpeech engine.
}
@Override
public ArrayList<SpeechSynthesisSetting> getSpeechSynthesisVoicePreferences() {
// The voice preferences array which is used to choose the best match available voice for speech synthesis.
return null;
}
@Override
public void setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences) {
// Sets the voice preferences array which can be used to choose the best match available voice for speech synthesis.
}
@Override
public SpeechSynthesisSetting onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting> speechSynthesisVoicePreferences) {
// Sets the speech synthesis voice to the best voice match available.
return null;
}
@Override
public void onSpeechRecognitionLocaleChange(Locale speechLocale) {
// If the speech recognition language is changed, the speech synthesis language can also be changed to the same language.
}
}
Note
SpeechSynthesisService#setSpeechSynthesisVoicePreferences(ArrayList<SpeechSynthesisSetting>)
and
SpeechSynthesisService#onSpeechSynthesisVoicePreferencesChange(ArrayList<SpeechSynthesisSetting>)
have been deprecated in this release and have been replaced by
SpeechSynthesisService#setTTSVoice(ArrayList<SpeechSynthesisSetting>)
and SpeechSynthesisService#getTTSVoice(). Previously,
SpeechSynthesisService#setSpeechSynthesisVoicePreferences
set the speech synthesis voice preference array and
SpeechSynthesisService#onSpeechSynthesisVoicePreferencesChange
set the best voice available for speech synthesis and returned the selected voice.
Now, the same functionality is attained through the new methods:
SpeechSynthesisService#setTTSVoice(ArrayList<SpeechSynthesisSetting>
TTSVoices), which sets both the speech synthesis voice preference array
and the best available voice for speech synthesis, and
SpeechSynthesisService#getTTSVoice(), which returns the
selected voice for speech synthesis.
Typing Indicator for User-Agent Conversations
Feature flag: enableSendTypingStatus
When enabled, the SDK sends a RESPONDING typing event, along
with the text that's currently being typed by the user, to Oracle B2C
Service or Oracle Fusion Service. This shows a typing
indicator on the agent console. When the user has finished typing, the SDK sends a
LISTENING event to Oracle B2C
Service or Oracle Fusion
Service. This hides the typing indicator on the agent console.
Similarly, when the agent is typing, the SDK receives a
RESPONDING event from the service. On receiving this event, the SDK
shows a typing indicator to the user. When the agent is idle, the SDK receives a
LISTENING event from the service. On receiving this event, the SDK
hides the typing indicator that's shown to the user.
The sendUserTypingStatus API enables the same behavior for
headless mode.
public void sendUserTypingStatus(TypingStatus status, String text)
To show the typing indicator on the agent console:
Bots.sendUserTypingStatus("RESPONDING", "<TEXT_ENTERED_BY_USER>");
To hide the typing indicator on the agent console:
Bots.sendUserTypingStatus("LISTENING", "");
To control the user-side typing indicator, use the
onMessageReceived(Message message) event. For example:
public void onMessageReceived(Message message) {
    if (message != null) {
        MessagePayload messagePayload = message.getPayload();
        if (messagePayload instanceof StatusMessagePayload) {
            StatusMessagePayload statusMessagePayload = (StatusMessagePayload) messagePayload;
            String status = statusMessagePayload.getStatus();
            if (status.equalsIgnoreCase(String.valueOf(TypingStatus.RESPONDING))) {
                // Show the typing indicator
            } else if (status.equalsIgnoreCase(String.valueOf(TypingStatus.LISTENING))) {
                // Hide the typing indicator
            }
        }
    }
}
There are two more settings that provide additional control:
typingStatusInterval - By default, the SDK sends
the RESPONDING typing event every three seconds to the service.
Use this flag to throttle this event. The minimum value that can be set is three
seconds.
enableAgentSneakPreview - Oracle B2C
Service supports showing the user text as it's being entered. If this flag is set to
true (the default is false), then the SDK
sends the actual text. To protect user privacy, the SDK sends …
instead of the actual text to Oracle B2C
Service when the flag is set to false.
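A sketch of these settings on the configuration builder; the method names are assumed from the flag names, and the typingStatusInterval value is assumed to be in seconds:
BotsConfiguration botsConfiguration = new BotsConfiguration.BotsConfigurationBuilder(<SERVER_URI>, false, getApplicationContext())
        .channelId(<CHANNEL_ID>)
        .userId(<USER_ID>)
        .enableSendTypingStatus(true)   // send RESPONDING/LISTENING events to the agent service
        .typingStatusInterval(3)        // seconds; three is the minimum
        .enableAgentSneakPreview(false) // send "…" instead of the user's actual text
        .build();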
You can enable dynamic updating of the user avatar at runtime.
public void updatePersonAvatar
Sets the user avatar for all the messages, including previous messages.
ConversationActivity.setUserPerson(Object);
Expose Agent Details
Use these APIs to modify the agent name, text color, avatar, agent name initials,
and avatar background.
public AgentDetails getAgentDetails()
Returns an object containing the agent details.
Bots.getAgentDetails();
Refer to the Javadocs for more details about the AgentDetails class.
public void setAgentDetails(AgentDetails)
Overrides the agent details received from the server.
Bots.setAgentDetails(AgentDetails);
Voice Visualizer
When voice support is enabled
(enableSpeechRecognition(true)), the footer of the chat widget
displays a voice visualizer, a dynamic visualizer graph that indicates the frequency
level of the voice input. The visualizer responds to the modulation of the user's voice
by indicating whether the user is speaking too softly or too loudly. This visualizer is
created using the stream of bytes that are recorded while the user is speaking, which is
also exposed in the IBotsSpeechListener#onActiveSpeechUpdate(byte[])
method for use in headless mode.
The chat widget displays a voice visualizer when users click the voice icon.
It's an indicator of whether the audio level is high enough for the SDK to
capture the user's voice. The user's message, as it is recognized as text, displays
below the visualizer.
Note
Voice mode is
indicated when the keyboard icon appears.
When enableSpeechRecognitionAutoSend(true), the recognized
text is automatically sent to the skill after the user has finished dictating the
message. The mode then reverts to text input. When
enableSpeechRecognitionAutoSend(false), the mode also reverts to
text input, but in this case, users can modify the recognized text before sending the
message to the skill.