After creating the infrastructure for the sample ML Application, you can use the sample
ML Application project as a template to start building, deploying, and operating your own ML
Applications.
This project includes development best practices and provides the mlapp CLI,
a tool that simplifies ML Application development. To create an ML Application, you must use
the mlapp CLI, Terraform, the OCI SDK, or the OCI CLI. You can't create an ML
Application in the Console, but you can view ML Applications
and their details there.
By starting with the sample ML Application project, you can take your ML
Application implementation all the way to production. The project is built on experience
gained from helping organizations successfully deploy their applications to production.
The sample ML Application project is available here: sample-project.
Clone this project to use it as the foundation for your ML Application implementation.
The project includes documentation in README.md files, where you can find detailed information about the
project and its components.
Project Structure
The project consists of two main parts:
The infrastructure folder automates the creation of resources that the sample ML
Application depends on.
The ML Application folder contains the sample ML Application, including its
configuration, implementation, and the mlapp CLI.
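After cloning, the top-level layout looks roughly like the sketch below; the repository folder name and the exact files shown are illustrative, so treat the README.md files as the authoritative description of the structure:

```
sample-project/                  # cloned repository (name is illustrative)
├── infrastructure/              # automates creation of prerequisite resources
│   └── environments/
│       └── dev/                 # default development environment
└── ml-application/              # the sample ML Application
    ├── environments/
    │   └── dev/                 # environment configuration
    ├── application-def.yaml     # name, description, and implementation definition
    └── mlapp                    # the mlapp CLI script
```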
Configuring Prerequisite Resources
Before you begin building and deploying the ML Application, you need to create the
necessary resources that the sample ML Application depends on (for example, logs,
log groups, a Data Science project, and a subnet). This process can be automated by
following these steps:
Prepare the environment folder.
You can use the default dev environment folder
(environments/dev) or use it as a template to create a
custom environment. To create a custom environment:
Make a copy of the development environment folder
(environments/dev).
Rename the copied folder to match the name of the environment.
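The copy-and-rename steps can be sketched as a short shell session. The qa environment name is just an example, and the scratch setup lines exist only so the sketch runs outside the cloned repository:

```shell
set -e
# Scratch setup: in the real project, environments/dev already exists
# after cloning the sample repository.
mkdir -p environments/dev
touch environments/dev/input_variables.tfvars

# Copy the default dev environment and rename the copy after the new
# environment ("qa" is a hypothetical name).
cp -r environments/dev environments/qa

ls environments/qa   # input_variables.tfvars
```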
Configure the environment.
Navigate to the environment folder (for example,
environments/dev).
Edit the input_variables.tfvars file to configure
the environment settings.
Don't reuse resources across environments, as doing so breaks
environment isolation. For example, sharing a resource between the development
and QA environments could let an issue in development render the
QA environment unusable, delaying deployment to production.
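For orientation only, a .tfvars file contains simple HCL variable assignments like the sketch below. These variable names are invented for illustration and won't match the sample project's actual variables, so edit only what's already defined in the shipped input_variables.tfvars:

```hcl
# Hypothetical example of HCL variable assignments (illustration only;
# the real input_variables.tfvars defines the project's actual variables).
compartment_ocid = "ocid1.compartment.oc1..exampleuniqueid"
region           = "us-ashburn-1"
environment_name = "dev"
```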
Configuring the ML Application Environment
Prepare the environment folder.
You can use the default dev environment folder
(environments/dev) or use it as a template to create a
custom environment. To create a custom environment:
Make a copy of the development environment folder
(environments/dev).
Rename the copied folder to match the name of the environment.
Configure the environment.
In the environment configuration folder:
Edit the env-config.yaml file.
Edit the testdata-instance.json file (update the Object
Storage namespace, the part after the at sign ('@'), to match
your tenancy).
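The namespace update can be sketched as a sed substitution. The JSON content and both namespace values below are invented placeholders; the real testdata-instance.json ships with the sample project and has its own structure:

```shell
# Invented placeholder JSON standing in for testdata-instance.json.
json='{"bucket": "test-data@old_namespace"}'

# Replace everything after the '@' (up to the closing quote) with your
# tenancy's Object Storage namespace ("my_namespace" is a placeholder).
echo "$json" | sed 's/@[^"]*/@my_namespace/'
# prints {"bucket": "test-data@my_namespace"}
```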
Define resource references.
Edit the arguments.yaml file in the environment
configuration folder to define references to the resources the
application depends on.
In the infrastructure/environments/<your
environment> folder, run:
terraform output
Copy the output (excluding the last line) and paste it into
arguments.yaml, replacing =
with :.
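The = to : rewrite can also be scripted. The terraform output text below is simulated with invented variable names, and in practice you still drop the last line of the real output first:

```shell
# Simulated `terraform output` text (variable names are invented; the
# real output comes from the infrastructure/environments/<env> folder).
tf_output='subnet_id = "ocid1.subnet.oc1..exampleid"
project_id = "ocid1.datascienceproject.oc1..exampleid"'

# Replace the first " = " on each line with ": " to produce YAML lines
# suitable for arguments.yaml.
echo "$tf_output" | sed 's/ = /: /'
```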
Configuring and Initializing the mlapp CLI
Configure the ML Application project.
Navigate to the ml-application folder.
Edit the application-def.yaml file to define the name
and description of the ML Application and its implementation.
Create default_env to set the environment as the
default (this removes the need to specify it on the command line when
using the mlapp CLI). You can copy
ml-application/default_env.example to
ml-application/default_env and store the
environment name there.
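Creating the default_env file can be sketched like this. Here dev is the assumed environment name, and the scratch setup lines only stand in for the cloned repository's files:

```shell
set -e
# Scratch stand-in for the files shipped with the sample project.
mkdir -p ml-application
echo "dev" > ml-application/default_env.example

# Copy the example file and store the default environment name in it
# ("dev" is the assumed environment name).
cp ml-application/default_env.example ml-application/default_env
echo "dev" > ml-application/default_env

cat ml-application/default_env   # dev
```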
Initialize the environment.
In the ml-application folder,
run:
source ./mlapp init
This command adds the mlapp script to the PATH variable,
letting you run the CLI with mlapp instead of
./mlapp.
Building and Deploying the Application
With everything configured, you can now start building and deploying the
application.
Note
Learn about mlapp CLI commands by running:
mlapp -h
Build the application.
mlapp build
Deploy the application.
mlapp deploy
Instantiate the application.
mlapp instantiate
Trigger a run of the training pipeline.
mlapp trigger
Test the predictive service.
mlapp predict
After running an mlapp CLI command, check the results in the OCI Console
by navigating to Analytics & AI, Machine Learning, and then
ML Applications.
Use Defined and Free-Form Tags
The sample application illustrates how to use both defined and free-form tags to ensure
tenant isolation and enable tracking of runtime resources, specifically models.
Defined tags are used to associate tenant-specific identifiers with resources such as model
deployments, storage buckets, and models.
For runtime resources created dynamically from code (such as models), add both a defined
tag and a free-form tag. The free-form tag links the resource to the instance, enabling
automatic deletion when the instance is removed.
Add defined and free-form tags to a model
model_id = xgb_model.save(
    display_name='fetal_health_model',
    # needed for tenant isolation
    defined_tags={"MlApplications": {"MlApplicationInstanceId": instance_id}},
    # needed for ML App to be able to track the created model
    freeform_tags={"MlApplicationInstance": instance_id},
)