Monitor and Troubleshoot Loads
All data load operations done using the PL/SQL package DBMS_CLOUD are logged in the tables dba_load_operations and user_load_operations:
               
- dba_load_operations: shows all load operations.
- user_load_operations: shows the load operations in your schema.
Query these tables to see information about ongoing and completed data loads. Use a SELECT statement with a WHERE clause predicate on the TYPE column to show load operations with the type COPY.
               
For example:
SELECT table_name, owner_name, type, status, start_time, update_time, logfile_table, badfile_table 
   FROM user_load_operations WHERE type = 'COPY';
TABLE_NAME OWNER_NAME TYPE STATUS    START_TIME               UPDATE_TIME              LOGFILE_TABLE BADFILE_TABLE 
---------- ---------- ---- --------- ------------------------ ------------------------ ------------- ------------- 
TREEDATA   ADMIN      COPY COMPLETED 2022-10-20T23:15:19.990Z 2022-10-20T23:15:24.238Z COPY$1_LOG    COPY$1_BAD    
The LOGFILE_TABLE column shows the name of the table you can query to look at the log of a load operation. For example, the following query shows the log for this load operation:
               
SELECT * FROM COPY$1_LOG;

The BADFILE_TABLE column shows the name of the table you can query to look at the rows that raised errors during loading. For example, the following query shows the rejected records for the load operation. If there are no rejected rows in the operation, the query returns no rows.
               
SELECT * FROM COPY$1_BAD;

Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE table, you can correct the errors by specifying the appropriate format options in DBMS_CLOUD.COPY_DATA.
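For example, the following is a minimal sketch of re-running a load with corrected format options; the credential name, object store URI, and option values are placeholders, not values taken from this example:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'TREEDATA',
    credential_name => 'DEF_CRED_NAME',  -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/b/treedata.csv',  -- placeholder URI
    format          => JSON_OBJECT(
                         'type'        VALUE 'csv',
                         'skipheaders' VALUE '1',    -- skip one header row
                         'rejectlimit' VALUE '100')  -- allow up to 100 rejected rows
  );
END;
/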
               
When the format type is "datapump", any rows rejected up to the specified rejectlimit are logged in the log file, but a BADFILE_TABLE is not generated.
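As a sketch, a Data Pump load with a reject limit might look like the following (placeholder credential and URI again):

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'TREEDATA',
    credential_name => 'DEF_CRED_NAME',  -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/b/treedata.dmp',  -- placeholder URI
    format          => JSON_OBJECT('type'        VALUE 'datapump',
                                   'rejectlimit' VALUE '10')  -- rejected rows are logged;
                                                              -- no BADFILE_TABLE is created
  );
END;
/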
               
By default, the LOGFILE_TABLE and BADFILE_TABLE tables are retained for two days and then automatically removed. You can change the number of retention days with the logretention option for the format parameter.
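For example, a sketch that keeps the log and bad tables for seven days instead of two (placeholders as above):

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'TREEDATA',
    credential_name => 'DEF_CRED_NAME',  -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/b/treedata.csv',  -- placeholder URI
    format          => JSON_OBJECT('type'         VALUE 'csv',
                                   'logretention' VALUE '7')  -- retain log/bad tables for 7 days
  );
END;
/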
               
See DBMS_CLOUD Package Format Options for information on the enablelogs, logdir, logprefix, and logretention options for the format parameter.
               
See DELETE_ALL_OPERATIONS Procedure for information on clearing the log files.
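For instance, a minimal sketch that clears the logged entries for all COPY operations in your schema:

BEGIN
  DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');  -- omit type to clear all operation types
END;
/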
See Track DBMS_CLOUD Load Operations for information on the dba_load_operations and user_load_operations views.
               
Monitor and Troubleshoot Bulk File Operations
See Monitor and Troubleshoot Bulk File Loads for information on monitoring and troubleshooting bulk file operations.
Monitor and Troubleshoot ORC, Parquet, or Avro File Loading
As with other data files, ORC, Parquet, and Avro data loads generate logs that are viewable in the tables dba_load_operations and user_load_operations. Each load operation adds a record to dba[user]_load_operations that indicates the table containing the logs.
                  
The log table provides summary information about the load.
For ORC, Parquet, or Avro files, when the format parameter type is set to the value orc, parquet, or avro, the BADFILE_TABLE table is always empty.
                     
- PRIMARY KEY constraint errors throw an ORA error.
- If data for a column encounters a conversion error, for example, the target column is not large enough to hold the converted value, the value for the column is set to NULL. This does not produce a rejected record (see the sketch after this list).
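As a sketch, a Parquet load followed by a check for conversion errors might look like this; the table name, credential, URI, and column name are placeholders:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES_DATA',     -- placeholder table
    credential_name => 'DEF_CRED_NAME',  -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/b/sales.parquet',  -- placeholder URI
    format          => JSON_OBJECT('type' VALUE 'parquet')
  );
END;
/

-- Columns that hit conversion errors are set to NULL rather than rejected,
-- so check for unexpected NULLs after the load:
SELECT COUNT(*) FROM sales_data WHERE amount IS NULL;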
Parent topic: Load Data from Files in the Cloud