You can define log sets to physically partition your logs based on your
infrastructure, application architecture, or organizational structure. Partitioning
optimizes search performance if you expect to persistently ingest more than 6 TB of
log data per day for a single tenant in a single region.
For more information about log partitioning, see Log Partitioning.
After you enable log partitioning for your tenancy in a specific region, you must
ensure that each log entry has a string value defined for the log set field.
Depending on your method of ingestion, follow the corresponding steps below
to specify the log set value.
When log partitioning is enabled in your tenancy, the option to select the log
set appears in the scope filter. Before you initiate a query, you must
select at least one of your log sets or use the wildcard *. Use the
wildcard only when you need to search across all logs: such a search is more
expensive and takes longer to execute because it scans all of your logs, and
queries may time out if the chosen time range covers too much data. See Use
Scope Filters.
Enable Log Partitioning 🔗
Log Partitioning is available in all regions and realms where Oracle Logging Analytics is available.
If your use case requires the partitioning feature, then file a Service
Request with Oracle Support requesting this feature in Oracle Logging Analytics.
Specify Log Set Value: Log Ingestion Through
Management Agent 🔗
You may be collecting logs either through a standalone Management Agent or
through the Management Agent that runs as part of Oracle Cloud Agent.
There are multiple ways to define the log set by editing the
emd.properties file of the Management Agent. Select one of the
following:
You can set a static log set value for all the logs
collected by the agent. Add the following property to the agent
emd.properties file to specify the agent-wide log set:
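The property itself is not shown here; following the pattern of the per-source properties shown later in this section, the agent-wide setting presumably takes this form (the exact syntax is an assumption, so verify it against the documentation for your agent version):

```
# Assumed agent-wide log set property; <Value> is a placeholder for your log set string
loganalytics.src.addl_src_ptn_configs=logset=<Value>
```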
Set your log set value as a part of the log file name. You
can specify in the agent configuration a regular expression to define
how to extract part of the log file path as the log set string. In the
agent emd.properties configuration file, specify the properties
logsetkey and logSetExtRegex:
loganalytics.src.addl_src_ptn_configs=logsetkey=logorigin,logSetExtRegex=<regex with a capture group>
logorigin is the log file path.
You can apply a different log set value or extraction to
each source by adding the source ID as a property to the
settings above.
//sets this log set value only for source 201904301
loganalytics.src.addl_src_ptn_configs=srcid=201904301,logset=<Value>
//sets this log set value based on regular expression extraction only for source 201904301
loganalytics.src.addl_src_ptn_configs=srcid=201904301,logsetkey=logorigin,logSetExtRegex=<regex with a capture group>
In addition to specifying the log set by using any of the methods above, include
the following property in the emd.properties configuration file:
loganalytics.src.override_config=true
Restart the agent after editing the configuration file.
Specify Log Set Value: Log Ingestion Through
Service Connector Hub 🔗
Currently, this feature is supported only for custom logs collected from OCI
Logging service into Oracle Logging Analytics
using Service Connector Hub.
To set the log set value, the custom log entry must include an additional field in the
data{} block of the OCI Logging Unified Logging Format payload.
For example,
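A minimal sketch of such a payload, assuming the new field is named logSet (the field name and surrounding fields are illustrative assumptions, so verify them against the current OCI Logging documentation):

```json
{
  "data": {
    "message": "connection established",
    "logSet": "<Value>"
  }
}
```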
Specify Log Set Value: Log Ingestion Through
Object Storage Collection 🔗
When creating an Object Storage collection rule, the same options used for
populating the log set in Management Agent collection are applicable.
You can set a static log set value for all the logs collected by this object
storage collection rule. Add the following property to the object storage
collection rule JSON:
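The static value presumably goes into a logSet property of the rule, as in the following sketch (the surrounding rule properties are abbreviated and the exact field names are assumptions based on the Object Collection Rule API; verify against the current API reference):

```json
{
  "name": "my-collection-rule",
  "osNamespace": "<namespace>",
  "osBucketName": "<bucket>",
  "logGroupId": "<logGroup_OCID>",
  "logSourceName": "<source>",
  "logSet": "<Value>"
}
```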
If you are using overrides in the Object Collection Rule, you can also set the
log set extraction for the override. In the following example, all the logs
collected from the bucket will get the log set Value1 except
those objects that match the override containing db. Those logs
will get a log set string captured from the regular expression on the object
path.
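The override form of the example might be sketched as follows (the overrides structure and the property names matchType, matchValue, propertyName, and propertyValue are assumptions based on the Object Collection Rule API; verify against the current API reference):

```json
{
  "logSet": "Value1",
  "overrides": [
    {
      "matchType": "contains",
      "matchValue": "db",
      "propertyName": "logSetExtRegex",
      "propertyValue": "<regex with a capture group>"
    }
  ]
}
```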
Specify Log Set Value: Log Ingestion Through
REST API 🔗
When posting log data files to the Log Events REST API endpoint
/actions/uploadLogEventsFile, you can specify the log set in the query
parameters of the POST call. All the log data uploaded in a single API call is stored with the
same log set.
For example:
POST /20200601/namespaces/<namespaceName>/actions/uploadLogEventsFile?logGroupId=<logGroup_OCID>&payloadType=JSON&logSet=<Value>
Specify Log Set Value: Log Ingestion Through
Fluentd 🔗
When using the Fluentd collector, the log set can be extracted from any
other field. Specify a regular expression that defines how to capture the expected
log set value from the field referenced by key_name into the field
oci_la_log_set.
<filter oci.source>
@type parser
key_name oci_la_log_path # The expression will be applied on the key "oci_la_log_path". You can pick any field that fluentd has parsed here.
<parse>
@type regexp
expression '.*\/(?<oci_la_log_set>[^\.]{1,40}).*' # Valid reg-ex for extraction
</parse>
</filter>
When the value of oci_la_log_path is
/n/axs4r325r2ct/b/logevents/o/fileType/logSetABC.log, the log set
value extracted by the above regular expression is
logSetABC.
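As a quick local sanity check outside of Fluentd, the same extraction can be reproduced with Python's re module, using Python's (?P&lt;name&gt;) named-group syntax in place of Ruby's (?&lt;name&gt;):

```python
import re

# Same pattern as in the Fluentd filter, written with Python's (?P<name>)
# named-group syntax instead of Ruby's (?<name>).
pattern = re.compile(r'.*/(?P<oci_la_log_set>[^.]{1,40}).*')

path = "/n/axs4r325r2ct/b/logevents/o/fileType/logSetABC.log"
match = pattern.match(path)

# The greedy leading .* consumes everything up to the last "/", and
# [^.]{1,40} then captures up to the first "." in the file name.
print(match.group("oci_la_log_set"))  # logSetABC
```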
Note
In the expression, the named capture group (?<oci_la_log_set>) is
required. Without it, the string matched by the regular expression is not
assigned to oci_la_log_set.