To control who has access to Data Science and the type
of access for each group of users, you must create policies.
To monitor Data Science resources, you must be given the required access in a policy. This is true whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must give you access to the monitoring services and the resources being monitored. If you try to perform an action and get a message that you don't have permission or are unauthorized, confirm with an administrator the type of access you've been granted, and which compartment you can work in. For more information on user authorizations for monitoring, see the Authentication and Authorization section for the related service, Monitoring or Notifications.
By default, only the users in the Administrators group have access to all Data Science resources. For everyone else who's involved with
Data Science, you must create policies that assign them the
appropriate rights to Data Science resources.
Data Science offers both aggregate and individual
resource types for writing policies.
You can use aggregate resource types to write fewer policies. For example, instead of allowing
a group to manage data-science-projects,
data-science-notebook-sessions, data-science-models, and
data-science-work-requests, you can have a policy that allows the group to
manage the aggregate resource type, data-science-family.
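Written out, the aggregate statement from the example above is a single policy (the group and compartment names are placeholders):

```
allow group <group_name> to manage data-science-family in compartment <compartment_name>
```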
Aggregate Resource Type
data-science-family
Individual Resource Types
data-science-projects
data-science-notebook-sessions
data-science-models
data-science-model-deployments
data-science-work-requests
data-science-jobs
data-science-job-runs
data-science-pipelines
data-science-pipeline-runs
data-science-private-endpoint
data-science-schedule
Supported Variables
To add conditions to your policies, you can use either OCI
general variables or service-specific variables.
The user who creates a notebook session is the only user who can open and use it.
Examples of Various Operations
allow group <data_science_hol_users> to manage data-science-projects
in compartment <datascience_hol>
allow group <data_science_hol_users> to manage data-science-models
in compartment <datascience_hol>
allow group <data_science_hol_users> to manage data-science-work-requests
in compartment <datascience_hol>
allow group <data_science_hol_users> to inspect data-science-notebook-sessions
in compartment <datascience_hol>
allow group <data_science_hol_users> to read data-science-notebook-sessions
in compartment <datascience_hol>
allow group <data_science_hol_users> to {DATA_SCIENCE_NOTEBOOK_SESSION_CREATE}
in compartment <datascience_hol>
allow group <data_science_hol_users> to
{DATA_SCIENCE_NOTEBOOK_SESSION_DELETE, DATA_SCIENCE_NOTEBOOK_SESSION_UPDATE, DATA_SCIENCE_NOTEBOOK_SESSION_OPEN, DATA_SCIENCE_NOTEBOOK_SESSION_ACTIVATE, DATA_SCIENCE_NOTEBOOK_SESSION_DEACTIVATE}
in compartment <datascience_hol>
where target.notebook-session.createdBy = request.user.id
Details for Verbs + Resource Type Combinations
You can combine various OCI verbs and resource types to
create a policy.
The policy syntax is:
allow <subject> to <verb> <resource_type> in <location> where <conditions>
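For example, a statement that fills in each part of the syntax (the group and compartment names are hypothetical; the condition uses the service-specific createdBy variable described earlier):

```
allow group data-scientists to use data-science-notebook-sessions in compartment ds-team
where target.notebook-session.createdBy = request.user.id
```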
The following describe the permissions and API operations covered by each verb for Data Science. The level of access is cumulative as you go from
inspect to read to use to
manage. A plus sign (+) in a table cell indicates
incremental access compared to the cell directly above it, whereas "no extra" indicates no
incremental access.
The APIs covered for the data-science-schedule resource type are listed
here. The APIs are displayed alphabetically for each permission.
| Verbs | Permissions | APIs Fully Covered | APIs Partially Covered |
|---|---|---|---|
| inspect | DATA_SCIENCE_SCHEDULE_INSPECT | ListSchedule, ListWorkRequests | No extra |
| read | inspect + DATA_SCIENCE_SCHEDULE_READ | inspect + GetSchedule, GetWorkRequest | CreateJob, CreateJobRun (You also need create data-science-job.), CreateModel (You also need manage data-science-models.), CreateNotebookSession (You also need manage data-science-notebook-sessions.) |
| use | read + DATA_SCIENCE_SCHEDULE_USE | read + UseSchedule | No extra |
| manage | use + DATA_SCIENCE_SCHEDULE_CREATE, DATA_SCIENCE_SCHEDULE_DELETE, DATA_SCIENCE_SCHEDULE_MOVE, DATA_SCIENCE_SCHEDULE_UPDATE | use + ChangeScheduleCompartment, CreateSchedule, DeleteSchedule, UpdateSchedule | No extra |
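Going by the cumulative access in the table, a group that needs to view and run existing schedules, but not create, change, or delete them, could be granted the use verb (the group and compartment names are placeholders):

```
allow group <group_name> to use data-science-schedule in compartment <compartment_name>
```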
Policy Examples
Note
The policies cover the Data Science aggregate resource type,
data-science-family, and the individual resource types. For
example, allow group <group_name> to manage data-science-family in
compartment <compartment_name> is the same as writing the
following four policies:
allow group <group_name> to manage data-science-projects in compartment
<compartment_name>
allow group <group_name> to manage data-science-notebook-sessions in compartment
<compartment_name>
allow group <group_name> to manage data-science-models in compartment
<compartment_name>
allow group <group_name> to manage data-science-work-requests in compartment
<compartment_name>
Note
For a step-by-step guide to configuring policies, see Creating Policies in the Manually Configuring
a Data Science Tenancy tutorial.
Example: List View
Allows a group to view the list of all Data Science models in a specific compartment:
allow group <group_name> to inspect data-science-models in compartment
<compartment_name>
The read verb for data-science-models covers the
same permissions and API operations as the inspect verb, plus the
DATA_SCIENCE_MODEL_READ permission and the API operations that
it covers, such as GetModel and GetModelArtifact.
Example: All Operations
Allows a group to perform all the operations listed for
DATA_SCIENCE_MODEL_READ in a specified compartment:
allow group <group_name> to read data-science-models in compartment
<compartment_name>
The manage verb for data-science-models includes
the same permissions and API operations as the read verb, plus the
APIs for the DATA_SCIENCE_MODEL_CREATE,
DATA_SCIENCE_MODEL_MOVE,
DATA_SCIENCE_MODEL_UPDATE, and
DATA_SCIENCE_MODEL_DELETE permissions. For example, a user can
delete a model only with the manage verb or the specific
DATA_SCIENCE_MODEL_DELETE permission. With only
read permission for data-science-models, a
user can't delete models.
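If a group should be able to delete models without receiving everything else that manage grants, one option is to grant the specific permission directly, using the same permission-list syntax shown in the earlier notebook-session examples (the group and compartment names are placeholders):

```
allow group <group_name> to {DATA_SCIENCE_MODEL_DELETE} in compartment <compartment_name>
```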
Examples: Manage All Resources
Allows a group to manage all the resources for Data Science use:
allow group <group_name> to manage data-science-family in compartment
<compartment_name>
Allows a group to manage all the Data Science
resources, except for deleting the Data Science
projects:
allow group <group_name> to manage data-science-family in compartment
<compartment_name> where request.permission != 'DATA_SCIENCE_PROJECT_DELETE'
Policy Examples for Model Deployments
The following policy statements are ones you're likely to adopt in a tenancy for model deployments:
Allows a group of users, <group-name>, to perform all CRUD operations on models stored in the model catalog. Any user who wants to deploy a model through model deployment also needs access to the model they want to deploy.
allow group <group-name> to manage data-science-models
in compartment <compartment-name>
Allows a group of users, <group-name>, to perform all CRUD operations, including calling the predict endpoint, on model deployment resources in a particular compartment. You can change the manage verb to limit what the users can do.
allow group <group-name> to manage data-science-model-deployments
in compartment <compartment-name>
Lets a dynamic group of resources (such as notebook sessions)
perform all CRUD operations, including calling the predict endpoint, on model
deployment resources in a particular compartment. The manage verb
can be changed to limit what the resources can do.
allow dynamic-group <dynamic-group-name> to manage data-science-model-deployments
in compartment <compartment-name>
Or, you can authorize resources to do the same. With the following
statement, only the resources in the specified dynamic group can call the model
endpoint for the model deployment resources created in a specific compartment:
allow dynamic-group <dynamic-group-name-2> to {DATA_SCIENCE_MODEL_DEPLOYMENT_PREDICT}
in compartment <compartment-name>
(Optional) Lets a model deployment access the published conda
environments stored in an Object Storage bucket. This is required if you want to use
Published Conda Environments to capture the third-party dependencies of a model.
allow any-user to read objects in compartment <compartment-name>
where ALL { request.principal.type='datasciencemodeldeployment',
target.bucket.name=<published-conda-envs-bucket-name> }
(Optional) Lets a model deployment emit logs to the Logging service. You
need this policy if you're using Logging in a model deployment. This statement is
permissive; you could restrict it, for example, to use log-content in a
specific compartment.
allow any-user to use log-content in tenancy
where ALL {request.principal.type = 'datasciencemodeldeployment'}
(Optional) Lets a model deployment access an Object Storage bucket that
resides in a tenancy. For example, a deployed model might read files (such as a
lookup CSV file) from an Object Storage bucket that you manage.
allow any-user to read objects in compartment <compartment-name>
where ALL { request.principal.type='datasciencemodeldeployment', target.bucket.name=<bucket-name> }
Examples for Jobs and Job Runs
(Optional) You can integrate logging for jobs. When enabled, the job run resource
requires permissions to emit logs to the Logging
service. You must create a job runs dynamic group with:
all { resource.type='datasciencejobrun', resource.compartment.id='<job-run-compartment-ocid>' }
Then allow this dynamic group to write to the Logging service logs:
allow dynamic-group <job-runs-dynamic-group> to use log-content in compartment <your-compartment-name>
Lastly, the user starting the job runs must also have access to use log groups and
logs:
Note
If you use an instance principal dynamic group to create and start job runs, then
you must apply these group policies to the dynamic group. Specifically, the instance
principal must be granted manage log-groups.
allow group <group-name> to manage log-groups in compartment <compartment-name>
allow group <group-name> to use log-content in compartment <compartment-name>
(Optional) There are no extra policies required to run jobs with a Data Science conda environment. To run jobs with a
published custom conda environment, the job run resource requires permissions to
download the conda environment from your tenancy's Object Storage. You must allow
the job runs dynamic group to access objects in your compartment with:
allow dynamic-group <job-runs-dynamic-group> to read objects in compartment <compartment-name> where target.bucket.name='<bucket-name>'
To pull the container image from OCIR, add this policy:
allow dynamic-group <your-dynamic-group> to read repos in compartment <compartment-name>
If your repository is in the root compartment, you must allow read for the tenancy
with:
allow dynamic-group <your-dynamic-group> to read repos in tenancy where all {target.repo.name=<repository-name>}
Examples for Pipelines
Data Science uses other OCI services to run pipelines, mostly jobs.
To function correctly, pipelines require permissions to operate those resources on your
tenancy or compartment. You must create dynamic groups and policies to use Data Science pipelines.
Create a new dynamic group, or update an existing dynamic group, to add the following matching rules:
To allow pipeline runs to access OCI services
such as Logging, Networking, Object Storage, and so
on:
all {resource.type='datasciencepipelinerun',resource.compartment.id='ocid1.compartment.oc1..<>'}
If your pipeline includes at least one job as a step, you must allow the job run to
access resources:
all {resource.type='datasciencejobrun',resource.compartment.id='ocid1.compartment.oc1..<>'}
When working from notebook sessions using resource principal authentication, you must allow the notebook session to access resources:
all {resource.type='datasciencenotebooksession',resource.compartment.id='ocid1.compartment.oc1..<>'}
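If you prefer a single dynamic group, the three matching rules above can be combined with an Any {...} rule so that pipeline runs, job runs, and notebook sessions all match (the compartment OCIDs are placeholders):

```
Any {all {resource.type='datasciencepipelinerun', resource.compartment.id='ocid1.compartment.oc1..<>'},
     all {resource.type='datasciencejobrun', resource.compartment.id='ocid1.compartment.oc1..<>'},
     all {resource.type='datasciencenotebooksession', resource.compartment.id='ocid1.compartment.oc1..<>'}}
```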
Now, add the relevant policies to allow your dynamic group to access the resources in a compartment or tenancy. Following are some useful example policies for your dynamic group:
(Optional) Allow the dynamic group to manage all Data Science resources
such as notebooks, jobs, pipelines, and so on:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to manage data-science-family in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to use networking, including for access to OCI
Object Storage and File Storage:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to use virtual-network-family in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to manage Object Storage objects:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to manage objects in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to write to Logging service logs:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to use log-content in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to read repos:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to read repos in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to use Object Storage buckets as storage
mounts:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to use object-family in compartment <YOUR_COMPARTMENT_NAME>
allow service datascience to use object-family in compartment <YOUR_COMPARTMENT_NAME>
(Optional) Allow the dynamic group to use File Storage as storage
mounts:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to use file-systems in compartment <YOUR_COMPARTMENT_NAME>
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to use mount-targets in compartment <YOUR_COMPARTMENT_NAME>
allow service datascience to use file-systems in compartment <YOUR_COMPARTMENT_NAME>
allow service datascience to use mount-targets in compartment <YOUR_COMPARTMENT_NAME>
(Optional) If you use Data Flow applications as steps in pipelines:
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to manage dataflow-run in compartment <YOUR_COMPARTMENT_NAME>
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to read dataflow-application in compartment <YOUR_COMPARTMENT_NAME>
allow dynamic-group <YOUR_DYNAMIC_GROUP_NAME> to read object-family in compartment <YOUR_COMPARTMENT_NAME>
Ensure that users working with pipelines are granted the appropriate privileges. The
following policy assumes that the users belong to the datascienceusers
group.
allow group datascienceusers to inspect compartments in tenancy
allow group datascienceusers in tenancy where all {target.rule.type='managed', target.event.source in ('dataflow')}
allow group datascienceusers to read dataflow-application in compartment <YOUR_COMPARTMENT_NAME>