You can collect log data continuously from Oracle Cloud Infrastructure (OCI)
Object Storage. To enable log collection, create an ObjectCollectionRule resource
using the REST API or CLI. After this resource is successfully created and the
required IAM policies are in place, log collection is initiated.
You can use this method of collecting logs to ingest any type of log stored
in an object storage bucket.
You can collect logs from an Object Storage bucket in one of the following ways:
LIVE: For continuous collection of objects from the time the
ObjectCollectionRule is created. This is the default
method.
HISTORIC: For one-time collection of objects for a specified
time range.
HISTORIC_LIVE: For collection of all historic logs in the
bucket, followed by continuous collection of all newly created objects
containing logs.
Oracle Logging Analytics uses the OCI Events and Streaming services in conjunction with Object Storage to collect and process objects (LIVE or HISTORIC_LIVE types). When you configure a bucket for log collection, Oracle Logging Analytics creates an Events rule that emits an event notification for every new object uploaded to the bucket. The notifications are delivered to a stream that you specify.
A stream OCID is required for object collection rules of type LIVE or HISTORIC_LIVE. Oracle Logging Analytics uses it to create the Events rule and to consume the event notifications emitted by Object Storage. By default, if a cursor position was already set on the stream, the existing position is used; otherwise, consumption starts from the oldest available message in the stream.
Oracle Logging Analytics offers the following recommendations for creating the stream:
Set a retention period of 48 hours.
Size partitions based on throughput. Each partition can handle 1000 objects per second (across all the buckets that use the same stream). For more details about streaming limits, see Limits on Streaming Resources.
Optionally, consider having a single stream for the tenancy.
Use this stream only for the purpose of object collection to avoid issues during log processing.
In addition to streamId, you can provide streamCursorType to specify the position in the stream from which to start consuming. Four cursor types are available for fetching messages from the stream. See Consuming Messages.
DEFAULT: Uses the existing cursor position if already set by any previous ObjectCollectionRule(s) using the same stream. Otherwise, it starts consuming from the oldest available message in the stream (similar to TRIM_HORIZON).
TRIM_HORIZON: Starts consuming from the oldest available message in the stream.
LATEST: Starts consuming messages that are published after the creation of this rule.
AT_TIME: Starts consuming messages from a specified time.
If streamCursorType is set to AT_TIME, then it also requires a streamCursorTime parameter, to indicate the time from which to consume the objects.
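As an illustration, the following is a hedged sketch of the stream-cursor portion of the create JSON, using only the property names defined in this topic (streamId, streamCursorType, streamCursorTime); the stream OCID is a placeholder:

```json
{
  "streamId": "<Stream_OCID>",
  "streamCursorType": "AT_TIME",
  "streamCursorTime": "2019-10-12T07:20:50.52Z"
}
```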
Note:
For proper functioning of log collection from object storage, ensure that the Event rules created by Oracle Logging Analytics are not tampered with.
Per bucket, you can have only one ObjectCollectionRule of type LIVE or HISTORIC_LIVE.
You can create up to 1000 unique object collection rules per tenancy in a region.
The object can be a single raw log file or any archive file (.zip, .gz, .tgz, .tar) containing multiple log files. The number of files inside an archive file must be less than 2000, including the directories if any.
The maximum size of the object (single file or archive file) is 1 GB. The uncompressed size of the object must be less than 10 GB.
For proper functioning of log collection, Oracle recommends that you use this stream only for the purpose of object collection.
Prerequisites: Before enabling log collection using this approach,
ensure that you:
Create a new log source or use an Oracle-defined log source that matches your log format. See Create a Source.
Create a log group or use an existing log group in which to store these logs to control user access to the logs, and note the log group OCID. See Create Log Groups to Store Your Logs.
For LIVE or HISTORIC_LIVE collection types, create a new stream or use an existing stream (that is used only for object collection). See Create a Stream.
To stop collecting objects from the bucket, delete the
ObjectCollectionRule. This deletes only the configuration associated
with the bucket; it has no effect on already collected log data or on the
objects in the bucket.
Allow Log Collection from Object Storage 🔗
The following IAM policy statements must be included in your policy to grant the
user group permission to perform the required operations on
ObjectCollectionRule:
allow group <group_name> to use loganalytics-object-collection-rule in compartment <object_collection_rule_compartment>
allow group <group_name> to {LOG_ANALYTICS_LOG_GROUP_UPLOAD_LOGS} in compartment <log_group_compartment>
allow group <group_name> to {LOG_ANALYTICS_ENTITY_UPLOAD_LOGS} in compartment <entity_compartment>
allow group <group_name> to {LOG_ANALYTICS_SOURCE_READ} in tenancy
allow group <group_name> to {BUCKET_UPDATE, BUCKET_READ, BUCKET_INSPECT} in compartment <object_store_bucket_compartment>
allow group <group_name> to {OBJECT_INSPECT, OBJECT_READ} in compartment <object_store_bucket_compartment>
allow group <group_name> to {STREAM_CONSUME, STREAM_READ} in compartment <stream_compartment>
If you are creating IAM policies at Oracle Logging Analytics aggregate resources level, then the following policy statements must
be included to use object collection:
allow group <group_name> to use loganalytics-features-family in tenancy
allow group <group_name> to use loganalytics-resources-family in compartment/tenancy
allow group <group_name> to use object-family in compartment <object_store_bucket_compartment>
allow group <group_name> to use stream-family in compartment <stream_compartment>
On the other hand, if you are creating IAM policies at the level of individual
resource-types, then the following policy statements are required to use object
collection:
allow group <group_name> to use loganalytics-object-collection-rule in compartment <object_collection_rule_compartment>
allow group <group_name> to use loganalytics-log-group in compartment <log_group_compartment>
allow group <group_name> to {LOG_ANALYTICS_ENTITY_UPLOAD_LOGS} in compartment <entity_compartment>
allow group <group_name> to read loganalytics-source in tenancy
allow group <group_name> to use object-family in compartment <object_store_bucket_compartment>
allow group <group_name> to use stream-family in compartment <stream_compartment>
group_name in all the above policy statements refers to the user
group that must be given the required permissions.
Note
By default, Object Storage does not automatically emit events at the object level. Either enable event emission beforehand or have the required permissions when creating the ObjectCollectionRule. To enable event emission, see Managing Objects. Also, if you delete and recreate a bucket, set the Emit Object Events flag for the recreated bucket so that the existing log collection rule continues to work.
For log collection to work, along with the above permissions for creating the ObjectCollectionRule, you must also grant Oracle Logging Analytics permission to read the objects from the bucket in your tenancy, use the stream to fetch messages, and manage the cloud event rules in the compartment or tenancy where the Object Storage bucket is located. The object collection process uses a resource principal for the loganalyticsobjectcollectionrule resource to access the objects inside your bucket. The following additional IAM policy statements are required:
Create a dynamic group <Dynamic_Group_Name> with the following matching rule:
ALL {resource.type='loganalyticsobjectcollectionrule'}
Add the following additional IAM policy statements:
allow DYNAMIC-GROUP <Dynamic_Group_Name> to read buckets in compartment/tenancy
allow DYNAMIC-GROUP <Dynamic_Group_Name> to read objects in compartment/tenancy
allow DYNAMIC-GROUP <Dynamic_Group_Name> to manage cloudevents-rules in compartment/tenancy
allow DYNAMIC-GROUP <Dynamic_Group_Name> to inspect compartments in tenancy
allow DYNAMIC-GROUP <Dynamic_Group_Name> to use tag-namespaces in tenancy where all {target.tag-namespace.name = /oracle-tags/}
allow DYNAMIC-GROUP <Dynamic_Group_Name> to {STREAM_CONSUME} in compartment <stream_compartment>
Some of the above policy statements are included in the readily
available Oracle-defined policy templates. Consider using the template
for your use case. See Oracle-defined Policy Templates for Common Use Cases.
ObjectCollectionRule Operations 🔗
Using REST API or CLI, you can perform operations such as Create,
Update, Delete, List, and
Get on the ObjectCollectionRule resource.
To communicate with OCI cloud services, create an API Signing Key and register it in your user account in OCI. To generate and register the key and to collect the tenancy's OCID and user's OCID, see Security Credentials - API Signing Key.
The following mandatory properties must be provided in the JSON:
name: A unique name given to the ObjectCollectionRule. The
name must be unique within the tenancy, and cannot be modified.
compartmentId: The OCID of the compartment in which the
ObjectCollectionRule is located.
osNamespace: The object storage namespace.
osBucketName: The name of the object storage bucket.
logGroupId: Logging Analytics log group OCID to associate the processed logs with.
logSourceName: Name of the Logging Analytics source to use for the processing.
streamId: The stream OCID. Required for object collection rules of type LIVE or HISTORIC_LIVE.
In addition to the mandatory properties, you can optionally
specify the following properties:
collectionType: The type of collection. Allowed values are LIVE, HISTORIC, and HISTORIC_LIVE.
charEncoding: The character encoding of the objects. Example values: ISO_8859_1, UTF-16
definedTags: Defined tags for this resource. Each key is predefined and scoped to a namespace. For example, {"foo-namespace": {"bar-key": "value"}}
description: A string that describes the details of the rule, not more than 400 characters long.
entityId: OCID of the Logging Analytics entity with which the collected logs will be associated.
freeformTags: Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. For example, {"bar-key": "value"}
pollSince: The oldest time of the file in the bucket to consider for HISTORIC or HISTORIC_LIVE collection. Accepted values are BEGINNING, CURRENT_TIME, or RFC3339 formatted datetime string. For example, 2019-10-12T07:20:50.52Z in RFC3339 format.
pollTill: The newest time of the file in the bucket to consider for HISTORIC or HISTORIC_LIVE collection. Accepted values are CURRENT_TIME or RFC3339 formatted datetime string. For example, 2019-10-12T07:20:50.52Z in RFC3339 format.
streamCursorType: Cursor type used to fetch messages from stream.
isForceHistoricCollection: Flag to allow historic collection if the poll period overlaps with an existing ACTIVE collection rule. Default is false.
streamCursorTime: The time from which to consume the objects, if streamCursorType is AT_TIME. Accepted value must be RFC3339 formatted datetime string. For example, 2019-10-12T07:20:50.52Z.
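The minimal sketch below, based on the mandatory properties listed above, writes a create.json for a HISTORIC collection and validates it locally. The commented oci invocation at the end is an assumption; confirm the exact flags with `oci log-analytics object-collection-rule create --help`.

```shell
# Sketch: build a create.json with the mandatory properties plus an optional
# HISTORIC collection window, then validate the JSON locally. All <...>
# values are placeholders. streamId is omitted because it is required only
# for LIVE or HISTORIC_LIVE rules.
cat > create.json <<'EOF'
{
  "name": "MyHistoricRule",
  "compartmentId": "<Compartment_OCID>",
  "osNamespace": "<Namespace>",
  "osBucketName": "<Bucket_Name>",
  "logGroupId": "<Log_Group_OCID>",
  "logSourceName": "<My_Log_Source>",
  "collectionType": "HISTORIC",
  "pollSince": "BEGINNING",
  "pollTill": "CURRENT_TIME"
}
EOF
python3 -m json.tool create.json > /dev/null && echo "create.json is valid JSON"
# Hypothetical invocation -- confirm flags with:
#   oci log-analytics object-collection-rule create --help
# oci log-analytics object-collection-rule create --namespace-name <Namespace> \
#   --from-json file://create.json
```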
List ObjectCollectionRules:
oci log-analytics object-collection-rule list --namespace-name <Namespace> --compartment-id <compartment-OCID>
For example:
oci log-analytics object-collection-rule list --namespace-name "My Namespace" --compartment-id ocid1.compartment.oc1..exampleuniqueID
Get ObjectCollectionRule:
oci log-analytics object-collection-rule get --namespace-name <Namespace> --object-collection-rule-id <object-collection-rule-OCID>
For example:
oci log-analytics object-collection-rule get --namespace-name "My Namespace" --object-collection-rule-id ocid1.loganalyticsobjectcollectionrule.oc1..exampleuniqueID
Add Overrides to ObjectCollectionRule:
If the ObjectCollectionRule already exists, create an
override JSON file, for example update_override.json, with the
override conditions that you want to add to the ObjectCollectionRule:
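A hedged sketch of update_override.json follows, assuming an overrides map shaped around the override properties described later in this topic (matchType contains, with the match value applied to the object name). Verify the exact schema against the ObjectCollectionRule API reference:

```json
{
  "overrides": {
    "audit": [
      {
        "matchType": "contains",
        "matchValue": "audit",
        "propertyName": "logSourceName",
        "propertyValue": "myLOGANAuditSource"
      }
    ]
  }
}
```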
If you want to remove all the override conditions from the
ObjectCollectionRule, then create a JSON file, for example
remove_overrides.json, that sets the overrides
property as follows:
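A hedged sketch of remove_overrides.json: an empty overrides map is assumed to clear all override conditions. Verify this against the ObjectCollectionRule API reference:

```json
{
  "overrides": {}
}
```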
If the ObjectCollectionRule already exists, create a
filters JSON file, for example filters.json, with the filters on
object names that you want to add to the ObjectCollectionRule:
{
"objectNameFilters":["a/*","*audit*"]
}
Note
You can update an existing
ObjectCollectionRule with ObjectNameFilters only if it is
of the type LIVE or HISTORIC_LIVE. Type
HISTORIC is not supported for this operation.
Now update the ObjectCollectionRule to include the filters:
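The sketch below writes the filters.json shown above, validates it locally, and shows a hypothetical update invocation. The --from-json flag is the generic OCI CLI option for passing a request body; confirm the exact flags with `oci log-analytics object-collection-rule update --help`.

```shell
# Create filters.json (from the example above) and validate it before
# calling the update operation. All <...> values are placeholders.
cat > filters.json <<'EOF'
{
  "objectNameFilters": ["a/*", "*audit*"]
}
EOF
python3 -m json.tool filters.json > /dev/null && echo "filters.json is valid JSON"
# Hypothetical invocation -- confirm flags with:
#   oci log-analytics object-collection-rule update --help
# oci log-analytics object-collection-rule update --namespace-name <Namespace> \
#   --object-collection-rule-id <object-collection-rule-OCID> \
#   --from-json file://filters.json
```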
Allow Cross-Tenancy Log Collection from Object Storage 🔗
Set the following policies to configure the object collection rule for collecting
logs from a bucket in a guest tenant.
Let Guest_Tenant be the guest tenant and Bucket_Compartment the compartment in that guest tenant which has the object storage buckets from which the logs must be collected. Let Host_Tenant be the tenant which is subscribed to Oracle Logging Analytics.
For additional information about writing policies that let your tenancy access Object
Storage resources in other tenancies, see Accessing Object Storage Resources Across
Tenancies in Oracle Cloud Infrastructure Documentation.
Create a dynamic group <Dynamic_Group_Name> with the following matching rule:
ALL {resource.type='loganalyticsobjectcollectionrule'}
The stream can be available in either the Host_Tenant or the Guest_Tenant. Follow one of the workflows depending on where the stream is available:
Stream Available in Host_Tenant
Policies To Be Added in the Guest_Tenant:
define group <Host_User_Group> as <Host_User_Group_OCID>
define tenancy <Host_Tenant> as <Host_Tenant_OCID>
define DYNAMIC-GROUP <Host_Dynamic_Group> as <Dynamic_Group_OCID>
admit group <Host_User_Group> of tenancy <Host_Tenant> to use buckets in compartment <Bucket_Compartment>
admit group <Host_User_Group> of tenancy <Host_Tenant> to {OBJECT_INSPECT, OBJECT_READ} in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to read buckets in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to read objects in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to manage cloudevents-rules in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to inspect compartments in tenancy
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to use tag-namespaces in tenancy where all {target.tag-namespace.name = /oracle-tags/}
Policies To Be Added in the Host_Tenant:
define tenancy <Guest_Tenant> as <Guest_Tenant_OCID>
endorse group <Host_User_Group> to use buckets in tenancy <Guest_Tenant>
endorse group <Host_User_Group> to {OBJECT_INSPECT, OBJECT_READ} in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to read buckets in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to read objects in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to manage cloudevents-rules in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to inspect compartments in tenancy <Guest_Tenant>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to use tag-namespaces in tenancy <Guest_Tenant> where all {target.tag-namespace.name = /oracle-tags/}
allow DYNAMIC-GROUP <Host_Dynamic_Group> to {STREAM_CONSUME} in compartment <Stream_Compartment>
allow group <Host_User_Group> to {STREAM_CONSUME, STREAM_READ} in compartment <Stream_Compartment>
allow group <Host_User_Group> to use loganalytics-object-collection-rule in compartment <Rule_Compartment>
allow group <Host_User_Group> to {LOG_ANALYTICS_LOG_GROUP_UPLOAD_LOGS} in compartment <LogGroup_Compartment>
allow group <Host_User_Group> to {LOG_ANALYTICS_SOURCE_READ} in tenancy
Optionally, define this policy statement if the ObjectCollectionRule has associated entities:
allow group <Host_User_Group> to {LOG_ANALYTICS_ENTITY_UPLOAD_LOGS} in compartment <Entity_Compartment>
Stream Available in Guest_Tenant
Policies To Be Added in the Guest_Tenant:
define group <Host_User_Group> as <Host_User_Group_OCID>
define tenancy <Host_Tenant> as <Host_Tenant_OCID>
define DYNAMIC-GROUP <Host_Dynamic_Group> as <Dynamic_Group_OCID>
admit group <Host_User_Group> of tenancy <Host_Tenant> to use buckets in compartment <Bucket_Compartment>
admit group <Host_User_Group> of tenancy <Host_Tenant> to {STREAM_CONSUME, STREAM_READ} in compartment <Stream_Compartment>
admit group <Host_User_Group> of tenancy <Host_Tenant> to {OBJECT_INSPECT, OBJECT_READ} in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to {STREAM_CONSUME} in compartment <Stream_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to read buckets in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to read objects in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to manage cloudevents-rules in compartment <Bucket_Compartment>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to inspect compartments in tenancy <Guest_Tenant>
admit DYNAMIC-GROUP <Host_Dynamic_Group> of tenancy <Host_Tenant> to use tag-namespaces in tenancy where all {target.tag-namespace.name = /oracle-tags/}
Policies To Be Added in the Host_Tenant:
define tenancy <Guest_Tenant> as <Guest_Tenant_OCID>
endorse group <Host_User_Group> to use buckets in compartment <Bucket_Compartment>
endorse group <Host_User_Group> to {OBJECT_INSPECT, OBJECT_READ} in compartment <Bucket_Compartment>
endorse group <Host_User_Group> to {STREAM_CONSUME, STREAM_READ} in compartment <Stream_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to read buckets in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to read objects in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to manage cloudevents-rules in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to inspect compartments in compartment <Bucket_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to {STREAM_CONSUME} in compartment <Stream_Compartment>
endorse DYNAMIC-GROUP <Host_Dynamic_Group> to use tag-namespaces in tenancy <Guest_Tenant> where all {target.tag-namespace.name = /oracle-tags/}
allow group <Host_User_Group> to use loganalytics-object-collection-rule in compartment <Rule_Compartment>
allow group <Host_User_Group> to {LOG_ANALYTICS_LOG_GROUP_UPLOAD_LOGS} in compartment <LogGroup_Compartment>
allow group <Host_User_Group> to {LOG_ANALYTICS_SOURCE_READ} in tenancy
Optionally, define this policy statement if the ObjectCollectionRule has associated entities:
allow group <Host_User_Group> to {LOG_ANALYTICS_ENTITY_UPLOAD_LOGS} in compartment <Entity_Compartment>
In the above policies:
Rule_Compartment: The compartment in which ObjectCollectionRule must be created.
LogGroup_Compartment: The compartment of Oracle Logging Analytics log group in which the logs must be stored.
Entity_Compartment: The compartment of Oracle Logging Analytics entity.
Stream_Compartment: The compartment of OCI Stream.
After the required policies are created, you can create the ObjectCollectionRule to collect the logs from the guest tenancy's Object Storage. Provide the namespace (osNamespace) and bucket name (osBucketName) of the guest tenant in the JSON file, as shown in the following example:
{
"name": "<My_Rule>",
"compartmentId": "<Compartment_OCID>",
"osNamespace": "<Guest_Tenant_Namespace>", // Namespace of the guest tenant
"osBucketName": "<Guest_Tenant_Bucket1>", // Bucket in the guest tenant object store namespace
"logGroupId": "<Log_Group_OCID>",
"logSourceName": "<My_Log_Source>",
"streamId":"<Stream_OCID>"
}
Override ObjectCollectionRule Configuration to Process Specific Objects 🔗
When you want to process specific objects inside a bucket by using a
configuration that's different from the one defined for the ObjectCollectionRule,
use the override feature. The override feature lets you provide configuration
properties for objects whose names match specific patterns, prefixes, or directories
inside the bucket.
The properties for which you can specify overrides:
logSourceName, charEncoding,
entityId, timezone
The match type that you can use in the override: contains
The match value is always applied to the object name.
You can create a maximum of 10 overrides per ObjectCollectionRule by default. To change this default limit, raise a Service Request (SR) with justification. The request will be reviewed.
If you are creating the ObjectCollectionRule for the first time, then use one of the
following create.json examples as a starting point for the JSON for your use
case:
Sample create.json with overrides to process object names containing audit with logSourceName myLOGANAuditSource:
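A hedged reconstruction of this sample, assuming the overrides map shape (matchType contains, match value applied to the object name) described above; the bucket's default source is myLOGANSourceName, and all <...> values are placeholders. Verify the exact schema against the ObjectCollectionRule API reference:

```json
{
  "name": "<My_Rule>",
  "compartmentId": "<Compartment_OCID>",
  "osNamespace": "<Namespace>",
  "osBucketName": "<Bucket_Name>",
  "logGroupId": "<Log_Group_OCID>",
  "logSourceName": "myLOGANSourceName",
  "overrides": {
    "audit": [
      {
        "matchType": "contains",
        "matchValue": "audit",
        "propertyName": "logSourceName",
        "propertyValue": "myLOGANAuditSource"
      }
    ]
  }
}
```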
Using the above rule, objects whose names contain audit are processed with the logSourceName myLOGANAuditSource, while all other objects in the bucket are processed with the logSourceName myLOGANSourceName.
Sample create.json with overrides to process object names containing audit with logSourceName myLOGANAuditSource and those object names containing dir1/ with charEncoding UTF-16:
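A hedged reconstruction of this second sample, with two override entries under the same assumed overrides map shape as above; all <...> values are placeholders. Verify the exact schema against the ObjectCollectionRule API reference:

```json
{
  "name": "<My_Rule>",
  "compartmentId": "<Compartment_OCID>",
  "osNamespace": "<Namespace>",
  "osBucketName": "<Bucket_Name>",
  "logGroupId": "<Log_Group_OCID>",
  "logSourceName": "myLOGANSourceName",
  "overrides": {
    "audit": [
      {
        "matchType": "contains",
        "matchValue": "audit",
        "propertyName": "logSourceName",
        "propertyValue": "myLOGANAuditSource"
      }
    ],
    "dir1/": [
      {
        "matchType": "contains",
        "matchValue": "dir1/",
        "propertyName": "charEncoding",
        "propertyValue": "UTF-16"
      }
    ]
  }
}
```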
After creating the configuration JSON, specify the file path when you create the
ObjectCollectionRule. See section Create ObjectCollectionRule in ObjectCollectionRule Operations.
If you have already created the ObjectCollectionRule, then create a JSON file with the
details of the override and update the ObjectCollectionRule to include it. See
section Add Overrides to ObjectCollectionRule in ObjectCollectionRule Operations.
If you want to remove all the override conditions, then create a JSON file as
specified in the section Remove Overrides from ObjectCollectionRule and update
the ObjectCollectionRule. See ObjectCollectionRule Operations.
Perform Selective Object Collection by Applying Filters on Object Names 🔗
Use the Selective Object Collection feature to collect only a subset of the
objects in a given Object Storage bucket. The feature works by applying filters to
object names: when the filters are applied, only the objects matching the filters are
collected for processing.
The matchType property supports only exact match along with the
wildcard *. The following are some examples of filters using the
wildcard:
Filter objectName* specifies objects with the prefix objectName.
Filter *objectName specifies objects with the suffix objectName.
Filter *objectName* specifies objects that contain the text objectName.
If you are creating the ObjectCollectionRule for the first time, then use one of the
following create.json examples as a starting point for the JSON for your use
case:
Sample create.json with objectNameFilters to process object names that have a prefix a/ and contain the text audit:
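A hedged reconstruction of this sample, combining the mandatory properties with the objectNameFilters property shown earlier in this topic; all <...> values are placeholders:

```json
{
  "name": "<My_Rule>",
  "compartmentId": "<Compartment_OCID>",
  "osNamespace": "<Namespace>",
  "osBucketName": "<Bucket_Name>",
  "logGroupId": "<Log_Group_OCID>",
  "logSourceName": "<My_Log_Source>",
  "objectNameFilters": ["a/*", "*audit*"]
}
```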
If required, you can also use the override feature to specify different configuration for each of the filters. For the above example, you can additionally specify that all the objects containing the text audit must use the source myLOGANAuditSource:
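A hedged reconstruction combining the filters with an override, under the same assumed overrides map shape used in the override examples; all <...> values are placeholders. Verify the exact schema against the ObjectCollectionRule API reference:

```json
{
  "name": "<My_Rule>",
  "compartmentId": "<Compartment_OCID>",
  "osNamespace": "<Namespace>",
  "osBucketName": "<Bucket_Name>",
  "logGroupId": "<Log_Group_OCID>",
  "logSourceName": "myLOGANSourceName",
  "objectNameFilters": ["a/*", "*audit*"],
  "overrides": {
    "audit": [
      {
        "matchType": "contains",
        "matchValue": "audit",
        "propertyName": "logSourceName",
        "propertyValue": "myLOGANAuditSource"
      }
    ]
  }
}
```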
After creating the configuration JSON, specify the file path when you create
the ObjectCollectionRule. See section Create ObjectCollectionRule in ObjectCollectionRule Operations.
If you have already created the ObjectCollectionRule, then create a
JSON file with the details of the filters and update the ObjectCollectionRule to
include it. See section Add objectNameFilters to ObjectCollectionRule in ObjectCollectionRule Operations.