Sizing the Data Flow Application
Every time you run a Data Flow Application, you specify an executor size and count, which in turn determine the number of OCPUs used to run the Spark application.
An OCPU is equal to a CPU core, which itself is equal to two vCPUs. See Compute Shapes for more information on how many OCPUs each shape contains.
A rough guide is to assume 10 GB of data processed per OCPU per hour. Optimized data formats such as Parquet are effectively processed much faster because only a small subset of the data needs to be read.
The formula to calculate the number of OCPUs needed, assuming 10 GB of data processed per OCPU per hour, is:
<Number_of_OCPUs> = <Processed_Data_in_GB> / (10 * <Desired_runtime_in_hours>)
For example, to process 1 TB of data with an SLA of 30 minutes, expect to use about
200 OCPUs:
<Number_of_OCPUs> = 1024 / (10 * 0.5) = 204.8
You can allocate 200 OCPUs in various ways. For example, you can select an executor shape of VM.Standard2.8 and 25 total executors for 8 * 25 = 200 total OCPUs.
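As a sanity check, this sizing arithmetic can be scripted. The following is a minimal Python sketch; the 10 GB per OCPU per hour rate is the rule of thumb above, and the helper names are illustrative only:

# Rough Data Flow sizing from the 10 GB per OCPU per hour rule of thumb.
# The rate constant and helper names are assumptions from this section,
# not values or calls from any Data Flow API.
import math

GB_PER_OCPU_HOUR = 10

def ocpus_needed(processed_gb, runtime_hours):
    """Estimate OCPUs required to process processed_gb within runtime_hours."""
    return processed_gb / (GB_PER_OCPU_HOUR * runtime_hours)

def executors_for(total_ocpus, ocpus_per_executor):
    """Executors of a given size needed to cover the OCPU estimate."""
    return math.ceil(total_ocpus / ocpus_per_executor)

estimate = ocpus_needed(1024, 0.5)   # 204.8 OCPUs for 1 TB in 30 minutes
print(executors_for(estimate, 8))    # 26 VM.Standard2.8 executors (208 OCPUs)

Rounding up to 26 executors gives a little headroom over the 25 executors (200 OCPUs) used in the example above.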
This formula is a rough estimate, and actual run times might differ. You can better estimate the workload's processing rate by loading the Application and viewing the history of its Runs. This history shows the number of OCPUs used, the total data processed, and the run time for each Run, letting you estimate the resources needed to meet your SLAs. From there, you can estimate the amount of data a Run processes and size the Run appropriately.
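For example, the following sketch shows how history figures translate into a new estimate. The run statistics are placeholder values you would read from the Runs page, not fields of any Data Flow API:

# Estimate a workload-specific processing rate from observed Run history,
# then size a future Run against an SLA. All figures are placeholders.
def observed_rate(data_gb, ocpus, runtime_hours):
    """Processing rate (GB per OCPU per hour) a past Run actually achieved."""
    return data_gb / (ocpus * runtime_hours)

def ocpus_for_sla(data_gb, sla_hours, rate):
    """OCPUs needed to process data_gb within sla_hours at the observed rate."""
    return data_gb / (rate * sla_hours)

# Suppose a past Run processed 512 GB on 64 OCPUs in 1.2 hours:
rate = observed_rate(512, 64, 1.2)      # about 6.7 GB per OCPU per hour
print(ocpus_for_sla(1024, 0.5, rate))   # about 307 OCPUs for 1 TB in 30 minutes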
Note
The number of OCPUs is limited by the VM shape you choose and the value set in the tenancy for VM.Total. You can't use more VMs across all VM shapes than the value of VM.Total. For example, if each VM shape limit is set to 20, and VM.Total is set to 20, you can't use more than 20 VMs across all the VM shapes. With flexible shapes, where the limit is measured in cores or OCPUs, 80 cores in a flexible shape is equal to 10 VM.Standard2.8 shapes. See Service Limits for more information.
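The limit arithmetic can be checked the same way. In this sketch, both limit values are hypothetical placeholders for the values shown on your tenancy's Service Limits page:

# Check an executor plan against tenancy limits. Both limits below are
# hypothetical; read the real values from the Service Limits page.
VM_TOTAL_LIMIT = 20     # assumed tenancy-wide VM limit
FLEX_CORE_LIMIT = 80    # assumed flexible-shape limit, measured in cores

def fits_vm_total(total_vms):
    """True if the VM count across all shapes stays within VM.Total."""
    return total_vms <= VM_TOTAL_LIMIT

def fits_flex_limit(ocpus_per_vm, num_vms):
    """True if a flexible-shape plan stays within the core-based limit."""
    return ocpus_per_vm * num_vms <= FLEX_CORE_LIMIT

print(fits_flex_limit(8, 10))   # True: 80 cores, the VM.Standard2.8 equivalent
print(fits_flex_limit(8, 11))   # False: 88 cores exceeds the 80-core limit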
Flexible Compute Shapes
Data Flow supports flexible compute shapes for Spark jobs.
The following flexible compute shapes are supported:
- VM.Standard3.Flex (Intel)
- VM.StandardE3.Flex (AMD)
- VM.StandardE4.Flex (AMD)
- VM.Standard.A1.Flex (Arm processor from Ampere)
When you create or edit an application, select the flexible shape for both the driver and the executor.
driver and executor. For each OCPU selection, you can select the flexible memory option.
Note
The driver and executor must have the same shape.
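The same selection can be made programmatically. Below is a minimal sketch using the OCI Python SDK, assuming an SDK version with flexible-shape support; all OCIDs, URIs, and sizing values are placeholders:

# Create a Data Flow application on a flexible compute shape with the
# OCI Python SDK. Every OCID, URI, and sizing value is a placeholder.
import oci

config = oci.config.from_file()   # reads credentials from ~/.oci/config
client = oci.data_flow.DataFlowClient(config)

details = oci.data_flow.models.CreateApplicationDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder OCID
    display_name="flex-shape-example",
    spark_version="3.2.1",
    language="PYTHON",
    file_uri="oci://bucket@namespace/app.py",          # placeholder URI
    driver_shape="VM.Standard3.Flex",
    driver_shape_config=oci.data_flow.models.ShapeConfig(
        ocpus=2, memory_in_gbs=32),
    executor_shape="VM.Standard3.Flex",                # must match the driver shape
    executor_shape_config=oci.data_flow.models.ShapeConfig(
        ocpus=4, memory_in_gbs=64),
    num_executors=10,
)

app = client.create_application(details).data
print(app.id)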
Migrating Applications from VM.Standard2 Compute Shapes
Follow these steps when migrating your existing Data Flow applications from VM.Standard2 to flexible compute shapes.