Triggers let ML Applications providers specify the triggering mechanism for their ML
jobs or pipelines, easing the implementation of fully automated MLOps.
Triggers are the entry points for runs of the ML workflows and are defined as YAML files
within the ML Applications package as instance components. Triggers are created automatically
when a new ML Applications instance is created, but only after all other instance components
are created, so a trigger can refer to the instance components created before it. By defining
triggers as instance components in an ML Application, providers give both providers and
consumers a way to start runs of the jobs or ML Pipelines that are part of the application.
Within the trigger definition, providers can specify:
Trigger target
Defines what is run. For example, a new pipeline or job run is created when the
trigger is activated or invoked.
Trigger condition
Defines when the trigger is run. You can define which HTTP endpoints (WebHooks)
activate or invoke the trigger, or which events (such as the creation of an instance)
fire it.
Trigger parameters
Define which parameters can be passed to the trigger upon its activation (invocation).
You can pass the parameter values further to the trigger target. For example, you can
pass a reference to a container image that's started in your pipeline or job.
Triggers can be activated or invoked by:
HTTP-based triggering
Triggers can be fired in response to HTTP requests. Two endpoints let users make
HTTP requests that fire triggers (a condition sketch follows this list).
Provider endpoint: Available on the ML Applications instance view resource, it's
meant to be used by providers.
Consumer endpoint: Available on the ML Applications instance resource, it's meant
to be used by consumers.
ML Applications providers have the option to enable these endpoints in any
combination as follows:
Only provider endpoint.
Only consumer endpoint.
Both provider and consumer endpoints.
Neither endpoint.
Event-based triggering
Triggers are fired in response to a lifecycle event on a particular resource. The only
supported resource is the ML Applications instance. The supported lifecycle events are:
Creating an ML Applications instance: lets providers implement ML Applications with
a one-time training run.
Upgrading an ML Applications instance: lets providers retrain the model when a new
version of the ML Applications Implementation (for example, with a new training
algorithm) is deployed.
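For illustration, here are minimal sketches of the condition section of a trigger definition (the full file format is described under Defining Triggers), one for each mechanism.
An HTTP-based condition that enables both the provider and the consumer endpoint:

    condition:
      requests:
        - source: providerEndpoint    # provider HTTP endpoint can fire the trigger
        - source: consumerEndpoint    # consumer HTTP endpoint can fire the trigger

An event-based condition that fires the trigger once, when the instance is created:

    condition:
      events:
        - source: mlApplicationInstance
          type: onCreate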
The trigger target is the create operation payload for the OCI resource that's created when the
trigger condition is met. The supported target types are DataScienceJobRun and
DataSciencePipelineRun.
Within trigger targets, you can refer to implicit variables and trigger parameters.
Implicit variable references are replaced with actual values when the instance is created,
updated, or upgraded. Parameter references are replaced with values when the trigger is
activated. For more information, see Implicit Variables for ML Applications Packages or Parameterized Triggers.
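As a minimal sketch, a fragment of a DataScienceJobRun target could combine both kinds of placeholders; the variable and parameter names here are hypothetical:

    jobId: ${job_component_id}              # implicit variable (hypothetical name), resolved when the instance is created, updated, or upgraded
    displayName: training-run-${run_label}  # trigger parameter (hypothetical name), resolved when the trigger is activated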
Defining Triggers
Triggers are defined as YAML files under the instance_components
directory in the application package. Triggers files use the extension
.trigger.yaml and must follow the schema as follows:
apiVersion
description: the version of the schema for this configuration file.
required: true
type: string
kind
description: the kind of resource (only ml-application-trigger
supported).
required: true
type: string
metadata
description: the metadata for a particular resource.
required: true
type: object (map)
properties (supported metadata):
name
description: the name as an identifier for a particular instance of the
resource.
required: true
type: string
spec
description: the specification of a particular resource.
required: true
type: object
properties:
parameters
description: A map of parameters that can be passed to the trigger upon its
activation (invocation).
required: false
type: map (parameter name maps to parameter properties). The parameter name must
match the "\w+" regexp (one or more alphanumeric or underscore characters).
parameter properties:
mandatory
type: Boolean (true or false)
required: false (default is false)
description: Whether the particular parameter is mandatory.
Note
Mandatory trigger parameters aren't allowed for triggers
that have consumerEndpoint or any kind of
event-based condition.
description
type: string
required: false
description: The parameter description.
validationRegexp
type: string
required: false
description: The regular expression used to validate the parameter value.
defaultValue
type: string
required: false
description: The value used when the parameter isn't specified in
the activation (invocation) request. A default value must be specified for
optional parameters and can also be specified for mandatory parameters. When
validationRegexp is specified, the default value must match it.
condition
description: The condition that defines when the trigger fires.
required: true
type: object
properties:
requests
description: The list of sources for direct trigger requests. For
each such request, the trigger tries to fire.
required: one of
type: array of objects
item type properties:
source
description: The source of trigger requests: if it's present
in the array it means the trigger fires on request from this
resource.
required: true
type: enum
enum values:
providerEndpoint: if a source with this type is present in the
requests section, triggering by HTTP request is enabled for providers
(/mlApplicationInstanceView/<mlApplicationInstanceViewId>/action/trigger).
consumerEndpoint: if a source with this type is present in the
requests section, triggering by HTTP request is enabled for consumers
(/mlApplicationInstance/<mlApplicationInstanceId>/action/trigger).
events
description: Events for which the trigger fires.
required: one of
type: array (items are polymorphic objects)
common properties for items (parent:polymorphism):
source
description: The event source: if it's present in the
array, the trigger fires on events coming from this source (the
first part of the discriminator for the various events).
required: true
type: enum
enum values:
mlApplicationInstance: The ML Applications
instance is an event source. Supported types:
onCreate, onVersionUpgrade
type
description: The event type (common property for all event
sources: the second part of the discriminator for
events).
target
description: The create payload for the resource used as a target. It
can contain various placeholders that are dynamically resolved and
replaced with actual values. Two types of placeholders exist: implicit
variables and trigger parameters.
type: object (expected JSON payload, note: JSON is valid YAML)
The placeholders must use the format ${variable_name}, that is, the placeholder
must begin with $ and must be followed by the variable name inside braces
{}.
jobId in the DataScienceJobRun template must refer to, or resolve to, a
DataScienceJob application component belonging to the ML Applications Implementation.
pipelineId in the DataSciencePipelineRun template must refer to, or
resolve to, a DataSciencePipeline application component belonging to the ML Applications
Implementation.
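Putting the schema together, here's a minimal sketch of a trigger file, for example instance_components/retraining.trigger.yaml. The apiVersion value, the implicit variable name, the job run fields other than jobId, and the way the DataScienceJobRun target type is declared are assumptions for illustration; the authoritative field names come from the package reference and the job run create payload.

    apiVersion: v1                      # assumed schema version value
    kind: ml-application-trigger
    metadata:
      name: retraining_trigger
    spec:
      parameters:
        dataset_url:                    # optional trigger parameter
          mandatory: false
          description: Object Storage URL of the training dataset
          validationRegexp: "^https://.*"
          defaultValue: "https://objectstorage.example.com/default-dataset"   # must match validationRegexp
      condition:
        requests:
          - source: providerEndpoint    # provider endpoint enabled
          - source: consumerEndpoint    # consumer endpoint enabled
      target:
        # Assumed to mirror the DataScienceJobRun create payload.
        # jobId must refer to a DataScienceJob application component of the
        # ML Applications Implementation.
        jobId: ${retraining_job_id}     # hypothetical implicit variable reference
        displayName: retraining-run
        jobConfigurationOverrideDetails:
          jobType: DEFAULT
          environmentVariables:
            DATASET_URL: ${dataset_url} # replaced with the trigger parameter value on activation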
Triggers can parameterize targets (Job or Pipeline runs) by referring to implicit
variables. However, implicit variables are updated only when the instance is created,
updated, or upgraded. When you need to pass a specific parameter with a value that's
known only at the time of the trigger activation (invocation), you can use trigger
parameters.
Trigger parameters can be optionally defined in the trigger YAML file. When the
parameters are defined, you can include their names and values in the payload of the
trigger activation (invocation) requests. All your references to the parameters in the
target definition are replaced with actual values provided in the payload of the
request.
To activate (invoke) a parameterized trigger, you need to send an HTTP POST to the
trigger endpoint. For example:
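A minimal sketch, using the consumer endpoint path shown earlier and the retraining_trigger definition sketched above; the shape of the request body (triggerName plus a list of name and value pairs) is an assumption, so check the trigger activation API reference for the authoritative payload:

    POST /mlApplicationInstance/<mlApplicationInstanceId>/action/trigger

    {
      "triggerName": "retraining_trigger",
      "parameters": [
        { "name": "dataset_url", "value": "https://objectstorage.example.com/march-dataset" }
      ]
    }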
Triggers that are defined by providers in ML Applications packages as YAML files can be
exposed to both providers and consumers. Providers and consumers can activate (fire or
trigger) the triggers and start a run of a pipeline or job.
The resource principal that's used to start the run differs for consumer and provider
invocations.
When consumers activate triggers, the ML Applications instance resource principal is used
(datasciencemlappinstanceint). On the other hand, when providers activate
triggers, the ML Applications instance view resource principal is used
(datasciencemlappinstanceviewint).
This implies that you need to define policies that let the instance or instance view resource
principal create runs. Because runs depend on networking and logging, you must let the
resource principals use networking and logging too. For details, see the Policy Setup section.