Elastic pool usage is billed to the pool leader. Billing is based on the
elastic pool size and the actual hourly ECPU usage of the pool leader and the
members. Elastic pool usage can exceed the pool size: the pool capacity is up to
four times the pool size.
The billing for an elastic pool covers only compute resources, that is,
ECPU usage, and all compute usage is charged to the Autonomous Database instance that is the pool leader. Any billing for storage
usage is charged separately to individual Autonomous Database instances, independent of whether the instance is in an
elastic pool.
An elastic pool allows you to consolidate your Autonomous Database instances in terms of
their compute resource billing. You can think of an elastic pool as a mobile phone
service “family plan” for your Autonomous Database instances. Instead of paying individually for each
database, the databases are grouped into a pool in which one instance, the leader, is
charged for the compute usage associated with the entire pool.
Using an elastic pool, you can provision up to four times your selected pool
size in total ECPUs, and you can provision database instances in the elastic pool
with as little as 1 ECPU per database instance. Outside of an elastic pool, the
minimum is 2 ECPUs per database instance. For example,
with a pool size of 128 you can provision 512 Autonomous Database instances (when each instance has 1 ECPU). In this example
you are billed for the pool size compute resources, based on the pool size of 128 ECPUs,
while you have access to 512 Autonomous Database instances. In contrast, when you individually provision 512 Autonomous Database instances without using an
elastic pool you are required to allocate a minimum of 2 ECPUs for each Autonomous Database instance, and in this
example you would pay for 1024 ECPUs. Using an elastic pool provides up to 87%
compute cost savings.
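The arithmetic behind this comparison can be sketched as follows (a minimal Python
sketch of the savings calculation; the variable names are illustrative and not part of
any Oracle API):

pool_size = 128                     # ECPUs billed for the elastic pool (one times the pool size)
instances = 512                     # Autonomous Database instances, 1 ECPU each
standalone_min_ecpu = 2             # minimum ECPUs per instance outside an elastic pool

standalone_cost = instances * standalone_min_ecpu   # 1024 ECPUs billed without a pool
pool_cost = pool_size                                # 128 ECPUs billed with the pool
savings = 1 - pool_cost / standalone_cost            # 0.875, that is, up to 87% savings
print(standalone_cost, pool_cost, round(savings, 3))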
After you create an elastic pool, the total ECPU usage for a given hour is
charged to the Autonomous Database instance
that is the pool leader. With the exception of the pool leader, individual Autonomous Database instances that are pool
members are not charged for ECPU usage while they are members of an elastic pool.
Elastic pool billing is as follows:
If the total aggregated peak ECPU utilization is equal to or below
the pool size for a given hour, you are charged for the pool size number of
ECPUs (one times the pool size).
After an elastic pool is created, ECPU billing continues at a minimum
of one times the pool size, even when databases that are part of the pool
are stopped. This applies to pool member databases and to the pool leader.
In other words, if the aggregated peak ECPU utilization of the pool
is less than or equal to the pool size for a given hour, you are charged for the
pool size number of ECPUs (one times the pool size). This represents up to
87% compute cost savings over the case in which these databases are
billed separately without using elastic pools.
If the aggregated peak ECPU utilization of the pool leader and the
members exceeds the pool size at any point in time in a given billing hour:
Aggregated peak ECPU utilization of the pool is equal to
or less than two times the pool size number of ECPUs: For
usage that is greater than one times the pool size number of ECPUs and
up to and including two times the pool size number of ECPUs in a given
billing hour, hourly billing is two times the pool size number of ECPUs.
In other words, if the aggregated peak ECPU utilization of
the pool exceeds the pool size, but is less than or equal to two times
the pool size for a given hour, you are charged for twice the pool size
number of ECPUs (two times the pool size). This represents up to 75%
compute cost savings over the case in which these databases are
billed separately without using elastic pools.
Aggregated peak ECPU utilization of the pool is equal to
or less than four times the pool size number of ECPUs: For usage
that is greater than two times the pool size number of ECPUs and up to
and including four times the pool size number of ECPUs in a given billing
hour, hourly billing is four times the pool size number of ECPUs.
In other words, if the aggregated peak ECPU utilization of
the pool exceeds twice the pool size for a given hour, you are charged
for four times the pool size number of ECPUs (four times the pool size).
This represents up to 50% compute cost savings over the case in
which these databases are billed separately without using elastic
pools.
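As a rough illustration, the hourly charge described above can be modeled as a step
function of the aggregated peak ECPU utilization (a minimal Python sketch of the
1x/2x/4x tiers; the function name is hypothetical and not an Oracle API):

def hourly_billed_ecpus(pool_size, aggregated_peak_ecpus):
    """Return the ECPUs billed for one hour under the 1x/2x/4x tier rules."""
    if aggregated_peak_ecpus <= pool_size:
        return pool_size              # one times the pool size
    if aggregated_peak_ecpus <= 2 * pool_size:
        return 2 * pool_size          # two times the pool size
    # Peak usage cannot exceed the pool capacity (four times the pool size).
    return 4 * pool_size              # four times the pool size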
For example, consider an elastic pool with a pool size of 128 ECPUs
and a pool capacity of 512 ECPUs:
Case-1: The aggregated peak ECPU utilization of the
pool leader and the members is 40 ECPUs between 2:00pm and 2:30pm, and
128 ECPUs between 2:30pm and 3:00pm.
The elastic pool is billed 128 ECPUs, one times the pool
size, for this billing hour (2-3pm). This case applies when the peak
aggregated ECPU usage of the elastic pool for the billing hour is less
than or equal to 128 ECPUs.
Case-2: The aggregated peak ECPU utilization of the
pool leader and the members is 40 ECPUs between 2:00pm and 2:30pm, and
250 ECPUs between 2:30pm and 3:00pm.
The elastic pool is billed 256 ECPUs, two times the pool
size, for this billing hour (2-3pm). This case applies when the peak
aggregated ECPU usage of the elastic pool for the billing hour is less
than or equal to 256 ECPUs and greater than 128 ECPUs.
Case-3: The aggregated peak ECPU utilization of the
pool leader and the members is 80 ECPUs between 2:00pm and 2:30pm, and
509 ECPUs between 2:30pm and 3:00pm.
The elastic pool is billed 512 ECPUs, four times the pool
size, for this billing hour (2-3pm). This case applies when the peak
aggregated ECPU usage of the elastic pool for the billing hour is less
than or equal to 512 ECPUs and greater than 256 ECPUs.
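Applying the hypothetical hourly_billed_ecpus sketch from above to these three cases
reproduces the billed amounts:

pool_size = 128
print(hourly_billed_ecpus(pool_size, 128))   # Case-1: peak 128 ECPUs -> billed 128 (1 x pool size)
print(hourly_billed_ecpus(pool_size, 250))   # Case-2: peak 250 ECPUs -> billed 256 (2 x pool size)
print(hourly_billed_ecpus(pool_size, 509))   # Case-3: peak 509 ECPUs -> billed 512 (4 x pool size)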
Elastic Pool Billing when a Pool
is Created or Terminated
When an elastic pool is created or terminated, the leader is billed for
the full hour for the elastic pool. In addition, individual instances that are
added to or removed from the pool are billed for any compute usage that occurs
while the instance is not in the elastic pool (in this case the billing applies to
the individual Autonomous Database
instance).
Pool Creation Example: Assume there is an Autonomous Database instance with
4 ECPUs that is not part of any elastic pool. At 2:15pm, if you create an
elastic pool with a pool size of 128 ECPUs using this instance, the instance
becomes the pool leader. Assuming the Autonomous Database idles between 2-3pm, and there are no other
Autonomous Database instances
in the pool, billing for the hour between 2-3pm is as follows:
The bill for the period 2-3pm is: (4 * 0.25) + 128 = 129
ECPUs
Where the (4 * 0.25) is the billing for compute for the fifteen
minutes before the Autonomous Database instance created the elastic pool, and 128 ECPUs is the
billing for the elastic pool for the hour when the elastic pool is
created.
Pool Termination Example: Assume an Autonomous Database instance with
4 ECPUs is the leader of an elastic pool and the pool size is 128 ECPUs. At
4:30pm, if you terminate the elastic pool, the database becomes a standalone
Autonomous Database instance
that is not part of any elastic pool. Assuming the Autonomous Database idles between
4-5pm, and there are no other Autonomous Database instances in the pool, billing for the hour
between 4-5pm is as follows:
The bill for 4-5pm is: (4 * 0.5) + 128 = 130 ECPUs
Where the (4 * 0.5) is the billing for compute for the thirty
minutes after the Autonomous Database instance terminates the elastic pool, and 128 ECPUs is the
billing for the elastic pool for the hour when the elastic pool was
terminated.
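Both prorated bills above can be reproduced with a short calculation (a minimal
Python sketch; 0.25 and 0.5 are the fractions of the hour the instance spends outside
the pool, during which its 4 allocated ECPUs are billed to the instance itself):

instance_ecpus = 4
pool_size = 128

# Pool creation at 2:15pm: 15 minutes standalone, then pool billing for the hour.
creation_hour_bill = instance_ecpus * 0.25 + pool_size      # (4 * 0.25) + 128 = 129 ECPUs

# Pool termination at 4:30pm: pool billing for the hour, plus 30 minutes standalone.
termination_hour_bill = instance_ecpus * 0.5 + pool_size    # (4 * 0.5) + 128 = 130 ECPUs

print(creation_hour_bill, termination_hour_bill)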
Elastic Pool Billing with Built-in
Tools
For either the pool leader or the members, compute resources that are
allocated to the built-in tools, OML, Graph, or Data Transforms, are separate and do
not count towards the elastic pool total allocation. For billing purposes, the
elastic pool leader is billed for any built-in tool ECPU usage by either the leader
or elastic pool members, in addition to the elastic pool ECPU usage.
For example, assume there is an elastic pool with a pool size of 128
ECPUs. If the aggregated peak ECPU utilization of the pool
leader and the members is 80 ECPUs for a given billing hour, and during this hour the
combined total ECPU utilization for instances using built-in tools is 30 ECPUs, the
leader is charged for the pool size (128 ECPUs), plus the built-in tool ECPU usage
(30 ECPUs), for a total of 158 ECPUs for that hour.
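This charge can be expressed as the pool tier charge plus the built-in tool usage,
continuing the hypothetical hourly_billed_ecpus sketch from earlier:

pool_size = 128
pool_peak = 80        # aggregated peak ECPU utilization of the leader and members
tools_ecpus = 30      # combined built-in tool (OML, Graph, Data Transforms) ECPU usage

leader_bill = hourly_billed_ecpus(pool_size, pool_peak) + tools_ecpus   # 128 + 30 = 158
print(leader_bill)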
Elastic Pool Usage Details in OCI
Usage Reports and OCI_USAGE_DATA View
You can obtain a detailed breakdown of elastic pool usage in the Oracle Cloud
Infrastructure (OCI) usage reports and this information is also shown in the
OCI_USAGE_DATA view. See Cost and Usage Reports and
OCI_USAGE_DATA View for more information.
Note
You can use OCI usage reports and the
OCI_USAGE_DATA view for information on cost and usage
before January 31, 2025. The OCI cost and usage reports are deprecated.
You can continue to access your existing usage report CSV files until July 31, 2025.
See Cost and Usage Reports for
more information.
The following table shows the product/resource column
values in an OCI usage report. The OCI usage report provides details on elastic pool
usage for the pool leader and for pool members for a given billing hour (similar
information is available in the OCI_USAGE_DATA view):
Elastic Pool Usage Type
Billing Hour Values Shown
Member Compute Usage
For a given pool member where the
product/resourceId column of the OCI usage
report is equal to the pool member’s OCID and the
product/resource column of the OCI usage
report is equal to PIC_ADBS_DB_ECPU_PEAK, the
usage/billedQuantity of the OCI usage
report shows the peak ECPU usage of the member in a specified
billing hour.
You can use the following query to view the same
usage details in the OCI_USAGE_DATA view:
SELECT billed_quantity FROM OCI_USAGE_DATA
WHERE resource_name='PIC_ADBS_DB_ECPU_PEAK' and
resource_id=OCID_of_the_pool_member and
interval_usage_start=start_time and
interval_usage_end=end_time
Note
If a member has
a local Autonomous Data
Guard standby, its peak usage is
reported as two times (2 x) its actual
peak usage.
Leader Compute Usage
For a given pool leader where the
product/resourceId column of the OCI usage
report is equal to the pool leader's OCID and the
product/resource column of the OCI usage
report is equal to PIC_ADBS_DB_ECPU_PEAK, the
usage/billedQuantity of the OCI usage
report shows the peak ECPU usage of the leader in a specified
billing hour.
You can use the following query to view the same
usage details in the OCI_USAGE_DATA view:
SELECT billed_quantity FROM OCI_USAGE_DATA
WHERE resource_name='PIC_ADBS_DB_ECPU_PEAK' and
resource_id=OCID_of_the_pool_leader and
interval_usage_start=start_time and
interval_usage_end=end_time
Note
If the leader
has a local Autonomous Data
Guard standby, its peak usage is
reported as two times (2 x) its actual
peak usage.
Aggregated Pool Compute Usage
For a given pool leader where the
product/resourceId column of the OCI usage
report is equal to the pool leader's OCID and the
product/resource column of the OCI usage
report is equal to
PIC_ADBS_ELASTIC_POOL_DB_ECPU, the
usage/billedQuantity of the OCI usage
report shows the aggregated peak ECPU usage of the leader and
all members in the specified billing hour.
You can use the following query to view the same
usage details in the OCI_USAGE_DATA view:
SELECT billed_quantity FROM OCI_USAGE_DATA
WHERE resource_name='PIC_ADBS_ELASTIC_POOL_DB_ECPU' and
resource_id=OCID_of_the_pool_leader and
interval_usage_start=start_time and
interval_usage_end=end_time
Notes for elastic pool billing information in the OCI usage report and
the OCI_USAGE_DATA view:
Elastic pool aggregate peak ECPU usage is shown for terminated
databases.
Elastic pool aggregate peak ECPU usage is shown for databases
that are no longer in a pool, for the billing hours during which they
were members of an elastic pool.
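For scripted access, the documented queries can also be run from a client program.
The following is a minimal Python sketch using the python-oracledb driver; the
connection details, bind values, and timestamps are hypothetical placeholders:

from datetime import datetime
import oracledb

# Hypothetical connection details; replace with your own credentials and DSN.
conn = oracledb.connect(user="admin", password="<password>", dsn="<dsn>")
cur = conn.cursor()

sql = """SELECT billed_quantity FROM OCI_USAGE_DATA
         WHERE resource_name = 'PIC_ADBS_ELASTIC_POOL_DB_ECPU'
           AND resource_id = :leader_ocid
           AND interval_usage_start = :start_time
           AND interval_usage_end = :end_time"""

# Example billing hour (2:00pm to 3:00pm); adjust to the hour you want to inspect.
binds = {"leader_ocid": "<OCID_of_the_pool_leader>",
         "start_time": datetime(2025, 1, 15, 14, 0),
         "end_time": datetime(2025, 1, 15, 15, 0)}

cur.execute(sql, binds)
for (billed_quantity,) in cur:
    print(billed_quantity)   # aggregated peak ECPU usage of the pool for that hour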
About Elastic Pool Billing with
Autonomous Data
Guard Enabled
The
elastic pool leader or members can enable either local or cross-region Autonomous Data
Guard, or both local and
cross-region Autonomous Data
Guard.
Local Autonomous Data
Guard Standby
Database Billing
When you add a local standby, a total of two times (2 x)
the primary's ECPU allocation is counted towards the pool capacity (1
x for the primary and 1 x for the standby).
That is, a local standby multiplies the primary's peak usage by two.
For example, if you create an elastic pool with a pool size of 128
ECPUs, with a pool capacity of 512 ECPUs, adding the following Autonomous Database instance uses the
entire elastic pool capacity:
1 instance with 256 ECPUs with local Autonomous Data
Guard
enabled, for a total of 512 ECPUs allocation from the pool.
When this instance runs at full utilization, its ECPU usage is 256 ECPUs;
however, the overall peak ECPU utilization is reported as 512 because of
the 2 x multiplication factor for the local standby database, and
billing is based on 4 x the pool size (512 ECPUs).
Similarly, if you create an elastic pool with a pool size of 128 ECPUs,
with a pool capacity of 512 ECPUs, adding the following Autonomous Database instances uses the
elastic pool capacity as follows:
128 instances with 2 ECPUs each, with local Autonomous Data
Guard
enabled, for a total of 512 ECPUs allocation from the pool.
When all of these databases run at peak 100% ECPU
utilization, the primary peak is 256 ECPUs (128 instances * 2 ECPUs per instance).
However, the overall peak ECPU utilization of the pool is reported
as 512 because of the 2 x factor for the standby
databases. Billing in this case is based on 4 x the
pool size, or 512 ECPUs.
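A minimal sketch of the 2 x local-standby factor, reusing the hypothetical
hourly_billed_ecpus helper from the earlier sketch (illustrative only):

pool_size = 128

def reported_peak(primary_peak_ecpus, has_local_standby):
    """A local Autonomous Data Guard standby doubles the reported peak usage."""
    return primary_peak_ecpus * (2 if has_local_standby else 1)

# One 256-ECPU instance with a local standby: reported peak 512, billed 4 x pool size.
print(hourly_billed_ecpus(pool_size, reported_peak(256, True)))       # 512

# 128 instances with 2 ECPUs each, all with local standbys: reported peak 512, billed 4 x.
print(hourly_billed_ecpus(pool_size, reported_peak(128 * 2, True)))   # 512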
Cross-Region Autonomous Data
Guard Standby
Database Billing
Enabling cross-region Autonomous Data
Guard for a leader or for a member has no effect on the
elastic pool capacity. A cross-region Autonomous Data
Guard peer database has its own OCID and the cross-region
peer is billed independently from the elastic pool.
Note the following:
Cross-region Autonomous Data
Guard peer ECPUs do not use pool capacity and billing for Autonomous Data
Guard
cross-region peer databases happens on the peer instance.
When the leader of an elastic pool enables cross-region Autonomous Data
Guard, the
cross-region peer database ECPU allocation does not count towards the
elastic pool capacity. Billing for cross-region Autonomous Data
Guard is on
the cross-region instance, which is not part of the elastic pool (elastic
pools do not operate across regions).
When a member of an elastic pool enables cross-region Autonomous Data
Guard, the
cross-region peer ECPU allocation does not count towards the pool capacity.
Billing for cross-region Autonomous Data
Guard is on the cross-region instance, which is
not part of the elastic pool (elastic pools do not operate across
regions).
For example, if you create an elastic pool with a pool size of 128 ECPUs
(with a pool capacity of 512 ECPUs), adding the following Autonomous Database instances of different
sizes uses the entire elastic pool capacity. The pool contains the following
instances:
1 instance with 128 ECPUs with cross-region Autonomous Data
Guard enabled (using a total of 128 ECPUs from the
pool).
64 instances with 2 ECPUs each with both local and cross-region
Autonomous Data
Guard enabled (using a total of 256 ECPUs from the
pool).
128 instances with 1 ECPU each, with cross-region Autonomous Data
Guard enabled (using 128 ECPUs from the pool).
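The pool capacity consumed in this example can be tallied as follows (a minimal
Python sketch; local standbys count two times the primary's allocation toward the
pool, while cross-region peers do not count at all):

def pool_allocation(instances):
    """Each entry: (instance count, ECPUs per instance, has local standby)."""
    total = 0
    for count, ecpus, has_local_standby in instances:
        total += count * ecpus * (2 if has_local_standby else 1)
    return total

example_pool = [
    (1, 128, False),    # 128-ECPU instance, cross-region ADG only -> 128 ECPUs from the pool
    (64, 2, True),      # 64 x 2 ECPUs, local and cross-region ADG -> 256 ECPUs from the pool
    (128, 1, False),    # 128 x 1 ECPU, cross-region ADG only      -> 128 ECPUs from the pool
]
print(pool_allocation(example_pool))   # 512, the entire pool capacity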