Find out how to migrate from Oracle Big Data Cloud Compute Edition (BDCE or BDC) to Big Data Service
Migration is done in several steps. You can migrate your artifacts to OCI Big Data Service from BDC on Oracle Cloud Infrastructure Classic or BDC on Oracle Cloud Infrastructure. At a high level, you do the following:
Export your existing cloud resources from BDC to Object Storage.
Import the exported cloud resources from Object Storage to Big Data Service.
Prerequisites
Before you begin, ensure the following:
You are a valid user with access to a compartment in Big Data Service.
You have permissions to do the following:
Access the OCI console using your credentials
Create a bucket in Oracle Object Storage so that you can copy the HDFS data. For information about Oracle Object Storage, see Overview of Object Storage.
You have the following OCI parameter values with you:
Tenancy ID: The OCID of the tenancy. For example, ocid1.tenancy.oc1..aaaaaaaa5syd62crbj5xpfajpmopoqasxy7jwxk6ihopm5vk6bxkncyp56kc. For more information, see Where to Get the Tenancy's OCID and User's OCID.
Fingerprint for the signing key: Required for an application user. For example, 03:8c:ef:51:c8:fe:6b:22:0c:5d:3c:43:a8:ff:58:d9. The fingerprint is created while generating and uploading the API signing key. For more information, see How to Get the Key's Fingerprint.
Passphrase for the signing key: Required if you have generated the key pair with a passphrase. The passphrase is set while generating and uploading the API signing key.
The resources that you can export from Big Data Cloud Compute Edition (BDC), the form in which they are exported, and what you do with them in OCI Big Data Service (BDS) are as follows:

Data in HDFS
Exported artifacts: Copied into the OCI Object Store at oci://<bucket>@<tenancy>/<exportedHdfsDir>. For example: oci://myStorageBucket@myTenancy/exportedHdfsDir
In BDS: Copy the exported data from the OCI Object Store to the target BDS HDFS directories.

Data in OCI-Classic Object Store
Note: This artifact doesn't apply to Oracle Big Data Cloud on Oracle Cloud Infrastructure.
Exported artifacts: Copied into the OCI Object Store at oci://<bucket>@<tenancy>/<exportedObjDir>. For example: oci://myStorageBucket@myTenancy/exportedObjDir

Hive Metadata
Exported artifacts: Generate the Hive DDL statements on the BDC cluster.
In BDS: Copy the Hive DDL statements from the BDC cluster into the BDS cluster, and execute them.

Zeppelin Notebooks
Exported artifacts: Export the Zeppelin notebook definitions as a .tar.gz file from /user/zeppelin/notebook in HDFS. This is done using a script provided by Oracle.
In BDS: Currently, importing Zeppelin Notebooks is not supported in BDS.

HDFS, YARN, Spark Configuration Files
Exported artifacts: Export the configuration files as a .tar.gz file using a utility script provided by Oracle.
In BDS: Because BDS has optimized configuration settings for HDFS, YARN, and Spark, you need not import the configuration files and versions from BDC.

Versions of various Open Source components
Exported artifacts: Export the service version details using the Ambari REST API (see the sketch after this list). You can also get version details from Ambari (Admin -> Stack and Versions).
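The Ambari REST call referenced above isn't shown in this document. A minimal sketch, assuming Ambari is reachable on its default port 8080 and that the admin password, host, and cluster name below are placeholders for your own values:
# List the stack versions registered for the cluster through the Ambari REST API.
curl -u admin:<admin-password> http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/stack_versions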
Migrating Resources Using WANdisco LiveData Migrator
Ensure that port 8020 is open at the destination.
For more information, see the WANdisco LiveData Migrator documentation.
To migrate resources using WANdisco LiveData Migrator, follow these steps:
Install LiveData Migrator on any edge node of the source cluster by running the following commands:
wget https://wandisco.com/downloads/livedata-migrator.sh
chmod +x livedata-migrator.sh && ./livedata-migrator.sh
service livedata-migrator status
service hivemigrator status
service livedata-ui status
After the installation and setup of LiveData Migrator are complete, access the UI and create your user account. The URL of the UI is as follows:
http://<LDM-Installation-Host.com>:8081
Do the following to migrate data:
Configure source filesystem.
To add a source filesystem, on your LiveData Migrator dashboard, do the following:
From the Products panel, select the relevant instance.
In the Filesystem Configuration page, click Add source filesystem.
Configure target filesystem.
To add a target filesystem, on your LiveData Migrator dashboard, do the following:
From the Products panel, select the relevant instance.
In the Filesystem Configuration page, click Add target filesystem.
Select Apache Hadoop as the target filesystem type for the BDS cluster and provide the default filesystem path. Make sure that the source and target can reach the destination on port 8020.
Create a path mapping.
Path mapping enables migrated data to be stored at an equivalent default location on the target. To create path mappings using the UI, follow these steps:
From the Products list on the dashboard, select the LiveData Migrator instance for which you want to create a path mapping.
From the Migrations menu, select Path Mappings.
At the top right of the interface, click the Add New Path button.
Create a migration.
Migrations transfer existing data from the defined source to a target. To create a new migration from the UI, follow these steps:
Provide a name for the migration.
From your filesystems, select a source and target.
Select the path on your source filesystem that you want to migrate. Use the folder browser to select the path name, and select the grey folder next to a path name to view its subdirectories.
Migrate the metadata.
To migrate the metadata, follow these steps:
Export Hive metadata from the source BDC cluster. For more information, see Exporting Hive Metadata.
Import the metadata to the destination BDS ODH 1.0 cluster. For more information, see Importing Metadata.
Migrating Resources Using the Distcp Tool
You can also migrate data and metadata from Big Data Cloud Compute Edition and import them into Big Data Service using the Distcp tool. Distcp is an open source tool used to copy large data sets between distributed file systems, within and across clusters.
Run the following commands to set the HDFS service in safe mode:
$ sudo su hdfs -c 'hdfs dfsadmin -safemode get'
Safe mode is OFF
$ sudo su hdfs -c 'hdfs dfsadmin -safemode enter'
Safe mode is ON
Do the following:
As a root user, copy your OCI private key (for example: ~/.oci/oci_api_key.pem) to the /home/opc/.oci directory on the BDC cluster.
sudo su -
cp <private-key> /home/opc/.oci/oci_api_key.pem
Change the private key file permissions to read-only.
chmod 444 /home/opc/.oci/oci_api_key.pem
Change the ownership of the private PEM key file to the hdfs user.
chown -R hdfs:hdfs /home/opc/.oci/oci_api_key.pem
Create a directory named /data that the hdfs user can access, as shown below.
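The exact commands aren't given in the source; a minimal sketch, assuming you create /data at the root of the filesystem as the root user:
mkdir -p /data
chmod 755 /data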
Copy the OCI private key from the /home/opc/.oci directory to the /data directory.
cp /home/opc/.oci/oci_api_key.pem /data
As a root user, change the private key file in the /data directory to read-only.
chmod 444 /data/oci_api_key.pem
If the BDC cluster has multiple nodes, ensure that the OCI private key is available on all the nodes at the same path. You can use the otools user to copy files to all the nodes. However, ensure that the file is readable by otools. After you copy the files to all nodes, set the original permissions back.
To install the OCI private key on all the hosts, as the otools user, execute the copy and permission commands in sequence for each host, as in the sketch below.
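The per-host commands aren't included in this document. A rough sketch, assuming passwordless SSH as the otools user, sudo rights for otools on each host, and <host> as a placeholder host name:
# Copy the key to each host at the same path, then restore the restrictive permissions.
scp /data/oci_api_key.pem otools@<host>:/tmp/oci_api_key.pem
ssh otools@<host> 'sudo mkdir -p /data && sudo mv /tmp/oci_api_key.pem /data/oci_api_key.pem && sudo chown hdfs:hdfs /data/oci_api_key.pem && sudo chmod 444 /data/oci_api_key.pem'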
Set up the distcp tool. The distcp tool requires an updated oci-hdfs driver, so you must download the driver to the host where the distcp tool is run (see the sketch after the next step).
Run the following commands to create a directory named hdfsjar under /data and to navigate to the directory.
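These commands aren't reproduced in the source; a minimal sketch that creates the directory and downloads the OCI HDFS connector, where the release version and archive name are assumptions to verify against the connector's release page:
mkdir -p /data/hdfsjar
cd /data/hdfsjar
# Download and unpack the updated oci-hdfs driver (OCI HDFS connector); replace <version> with the release you need.
wget https://github.com/oracle/oci-hdfs-connector/releases/download/v<version>/oci-hdfs.zip
unzip oci-hdfs.zip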
Use hadoop fs -cp to export HDFS data from a single node cluster to Object Store as an HDFS user.
sudo su - hdfs
hadoop fs -Dfs.oci.client.auth.fingerprint=<fingerprint> \
-Dfs.oci.client.auth.pemfilepath=<oci_pem_key> \
-Dfs.oci.client.auth.passphrase=<passphrase> \
-Dfs.oci.client.auth.tenantId=<OCID for Tenancy> \
-Dfs.oci.client.auth.userId=<OCID for User> \
-Dfs.oci.client.hostname=<HostName> \
-Dfs.oci.client.proxy.uri=http://<proxyhost>:<proxyport> \
-cp <hdfs-dir-to-copy> oci://<bucket>@<namespace>/<dir>/
Use hadoop distcp to export HDFS data from multi-node clusters to Object Store. Validate it against a specific directory first. For example, use /user/oozie/ for <hdfs-dir-to-copy> in the following command, and then run it for the entire HDFS (/).
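The distcp command itself isn't shown in the source; a sketch that mirrors the hadoop fs -cp options above (all placeholder values carry the same meaning as in that command):
sudo su - hdfs
# Ensure the oci-hdfs connector JARs downloaded earlier are on the Hadoop classpath (for example, through HADOOP_CLASSPATH).
hadoop distcp -Dfs.oci.client.auth.fingerprint=<fingerprint> \
-Dfs.oci.client.auth.pemfilepath=<oci_pem_key> \
-Dfs.oci.client.auth.passphrase=<passphrase> \
-Dfs.oci.client.auth.tenantId=<OCID for Tenancy> \
-Dfs.oci.client.auth.userId=<OCID for User> \
-Dfs.oci.client.hostname=<HostName> \
-Dfs.oci.client.proxy.uri=http://<proxyhost>:<proxyport> \
<hdfs-dir-to-copy> oci://<bucket>@<namespace>/<dir>/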
Update TARGET_HDFS_PATH and TARGET_OBJECTSTORE_PATH in the hive_migrate.properties file, shown below, to the locations where the Hive table data is to be available after the migration. Place the file in the same directory as the generate_target_ddl_from_source.sh script, which sources it. You need not escape the forward slashes in the paths.
#!/bin/bash
#
# Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
# Properties file to override
#
# This file will be sourced by the generate_target_ddl_from_source.sh
# to customize/override properties
# SRC_HDFS_PATH and SRC_OBJECTSTORE_PATH will be derived automatically from the cluster.
# You will be customizing
# RUNDIR, TARGET_HDFS_PATH, TARGET_OBJECTSTORE_PATH
#
# ------------------------------------------------------------
# Location to hold intermediate and final scripts to be generated.
# You must have write privileges on this directory
export RUNDIR=/tmp/hivemigrate
# ------------------------------------------------------------
# Specify current hive.metastore.warehouse.dir from Hive Config.
#
export SRC_WAREHOUSE_DIR=/apps/hive/warehouse
# ------------------------------------------------------------
# Modify expected target BDS hdfs path in your hive DDL script
#
export TARGET_HDFS_PATH=hdfs://testbdmn0.bmbdcsad1.bmbdcs.oraclevcn.com:8020/warehouse/tablespace/external/hive
# ------------------------------------------------------------
# Modify expected target BDS Object Store path in your hive DDL script
#
export TARGET_OBJECTSTORE_PATH=oci://bdc2odh@bdsdevcluster/warehouse
Create a script named generate_target_ddl_from_source.sh in the ~opc home directory with the following code. This script generates the DDL statements that you can run on the target cluster to create the hive metadata.
#!/bin/bash
#
# Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
#
# Script to generate queries that can be run on a target cluster for hive metadata migration.
#
# Usage :
# sudo generate_target_ddl_from_source.sh
# The generated scripts are available under /tmp/hivemigrate/generatedSQL_target_Hive
# - 1_CREATE_DB.hql
# - 2_CREATE_TABLE.hql
# - 3_ADD_PARTITION.hql
#
# These scripts can be run on a target cluster AS IS to create hive metadata that matches the source.
#
#------------------------------------------------------------
# Set Environment
detect_basedir()
{
DIRNAME=`dirname $0`
BASENAME=`basename $0`
BASEDIR=$(cd "$DIRNAME" ; pwd)
}
detect_basedir
#------------------------------------------------------------
# Initialize any default config properties that have been overridden in
# ${BASEDIR}/hive_migrate.properties
# DO NOT MODIFY. PLEASE OVERRIDE ANY PARAMETER IN ${BASEDIR}/hive_migrate.properties
export RUNDIR=/tmp/hivemigrate
export TARGET_HDFS_PATH=hdfs://Some_cluster_name/tmp
export TARGET_OBJECTSTORE_PATH=oci://mybucket@mytenancy/tmp/
#
#------------------------------------------------------------
if [ -f ${BASEDIR}/hive_migrate.properties ]; then
echo -e "*** Sourcing Overide proprties file ${BASEDIR}/hive_migrate.properties"
source ${BASEDIR}/hive_migrate.properties
else
echo -e "*** Warning : Missing Overide proprties file ${BASEDIR}/hive_migrate.properties"
echo -e "*** Continue with default settings\n"
fi
#
usage()
{
echo -e "Usage : ${BASENAME} \n"
echo -e "** This script is to be run on your source cluster and can be run as many times as you wish."
echo -e "** It cleans up any older generated scripts located in ${RUNDIR} before starting \n"
echo -e "** The generated output hive query scripts will be available in RUNDIR: ${RUNDIR} ...\n"
echo -e "*** Modify RUNDIR if you would like to override\n"
echo -e "** Currently TARGET_HDFS_PATH is set to hdfs://Some_cluster_name/"
echo -e "*** Modify TARGET_HDFS_PATH to match your hdfs path in Hive DDL \n"
}
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo -e "ERROR: *** This script ${BASENAME} must be run as root" 1>&2
usage
exit 1
fi
if [ "$1" != "" ]; then
usage
exit 1
fi
#------------------------------------------------------------
#
# Figure out the HDFS Path in source cluster.
# Depending on HA or non-HA cluster the hdfs path syntax is different
if ! grep -q "dfs.ha.namenodes.mycluster" /etc/bdcsce/conf/blueprint/*.json
then
echo -e "INFO:** Source: Non-HA cluster..."
nn_fqdn=$(hostname -f)
SRC_HDFS_PATH=hdfs://${nn_fqdn}:8020${SRC_WAREHOUSE_DIR}
else
echo -e 'INFO:** Source: HA cluster...'
SRC_HDFS_PATH=hdfs://mycluster${SRC_WAREHOUSE_DIR}
fi
echo -e "INFO:** Assumed source hdfspath : $SRC_HDFS_PATH"
#------------------------------------------------------------
#
# Figure out the source Object Store path
source /etc/bdcsce/conf/datasources.properties
export SRC_OBJECTSTORE_PATH=${bdcsce_default_objectstore_url}
#------------------------------------------------------------
# Step-1 : Setup/Clean source and target directories to hold intermediate and final scripts
#Path to store intermediate hive queries
outputSourcePath=${RUNDIR}/generatedSQL_source_Hive
rm -rf $outputSourcePath
mkdir -m 777 -p $outputSourcePath
#Path to store final target hive queries to be used at target system
outputTargetPath=${RUNDIR}/generatedSQL_target_Hive
rm -rf $outputTargetPath
mkdir -m 777 -p $outputTargetPath
#------------------------------------------------------------
# Step-2 : Extract Create DB statements from MetaStore to Run in Target Hive Installation.
#
mysql -Dhive --skip-column-names -e"
-- Query for creating databases
SELECT
CONCAT('CREATE DATABASE IF NOT EXISTS ',
D.NAME,
'\;') AS CREATE_DB_STATEMENTS
FROM
DBS D;" > $outputTargetPath/1_CREATE_DB.hql
ret=$?
if [ $ret -ne 0 ];
then
echo -e "Error:** - MySQL Error code is $ret while trying to extract DB creation scripts"
exit $ret
else
echo -e "** DB Creation extraction script $outputTargetPath/1_CREATE_DB.hql completed successfully"
fi
#------------------------------------------------------------
# Extract Show Create Table statements from MetaStore to Run in Source Hive Installation.
mysql -Dhive --skip-column-names -e"
-- Query for Creating Show CT Statements to be run in the Source Hive Installation
SELECT
CONCAT('USE ', D.name, '\; ','SHOW CREATE TABLE ',
T.tbl_name,
'\;') AS SHOW_CT_STATEMENTS
FROM
TBLS T
INNER JOIN
DBS D ON D.DB_ID = T.DB_ID
WHERE
T.TBL_TYPE != 'INDEX_TABLE';" > $outputSourcePath/1_SHOW_CREATE_TABLE.hql
ret=$?
if [ $ret -ne 0 ];
then
echo -e "Error:** - MySQL Error code is $ret while trying to extract SHOW CT scripts"
exit $ret
else
echo -e "\n\n"
echo -e "** SHOW CREATE TABLE extraction script $outputSourcePath/1_SHOW_CREATE_TABLE.hql completed successfully"
fi
#------------------------------------------------------------
# Run Show Create Table Command on Source Hive Installation to create target Hive CT scripts
echo -e "** Running show create table in source cluster"
echo -e "** This may take a while...Depends on hive metadata size\n\n"
sudo -u hive hive -f $outputSourcePath/1_SHOW_CREATE_TABLE.hql > $outputTargetPath/2_CREATE_TABLE_temp.hql
ret=$?
if [ $ret -ne 0 ];
then
echo -e "Error - Hive Error code is $ret while trying to execute SHOW CT scripts"
exit $ret
else
echo -e "\n\n"
echo -e "** SHOW CT execution script $outputTargetPath/2_CREATE_TABLE_temp.hql completed successfully"
fi
# Add semicolon suffix ; after CT statements
echo -e "USE default;" > $outputTargetPath/2_CREATE_TABLE.hql
cat $outputTargetPath/2_CREATE_TABLE_temp.hql | tr '\n' '\f' | sed -e 's/)\fCREATE/);\fCREATE/g' | tr '\f' '\n' >> $outputTargetPath/2_CREATE_TABLE.hql
echo -e ";">>$outputTargetPath/2_CREATE_TABLE.hql
rm -rf $outputTargetPath/2_CREATE_TABLE_temp.hql
echo -e "** Create Table execution script $outputTargetPath/2_CREATE_TABLE.hql created successfully"
#------------------------------------------------------------
# Convert Managed Table to External table for HIVE 3
#
echo -e "INFO:** Convering Managed Table to External table for HIVE 3 in CT $outputTargetPath/2_CREATE_TABLE.hql scripts"
sed -i "s/^CREATE TABLE/CREATE EXTERNAL TABLE/g" $outputTargetPath/2_CREATE_TABLE.hql
#------------------------------------------------------------
# Replace HDFS Path
#
echo -e "INFO:** Replacing $SRC_HDFS_PATH with $TARGET_HDFS_PATH in CT $outputTargetPath/2_CREATE_TABLE.hql scripts"
sed -i "s,$SRC_HDFS_PATH,$TARGET_HDFS_PATH,g" $outputTargetPath/2_CREATE_TABLE.hql
#------------------------------------------------------------
# Replace Target Object Store Path
#
echo -e "INFO:** Replacing $SRC_OBJECTSTORE_PATH with $TARGET_OBJECTSTORE_PATH in CT $outputTargetPath/2_CREATE_TABLE.hql scripts"
sed -i "s,$SRC_OBJECTSTORE_PATH,$TARGET_OBJECTSTORE_PATH,g" $outputTargetPath/2_CREATE_TABLE.hql
echo -e "INFO:** Replaced $SRC_OBJECTSTORE_PATH with $TARGET_OBJECTSTORE_PATH in CT $outputTargetPath/2_CREATE_TABLE.hql scripts"
#------------------------------------------------------------
# Extract Add Partition statements if any from MetaStore to Run in Target Hive Installation.
mysql -Dhive --skip-column-names -e"
-- Query to create add partition scripts to run in target Hive Installation
SET @oldNameNode = '$SRC_HDFS_PATH'; -- Old NameNode address
SET @newNameNode = '$TARGET_HDFS_PATH'; -- New NameNode address
SELECT
REPLACE(add_partition_query,
@oldNameNode,
@newNameNode) AS add_partition_query
FROM
(SELECT
CONCAT('USE ', D.name, '\; ', ' ALTER TABLE ', T.TBL_NAME, ' ADD PARTITION (', GROUP_CONCAT(PK.PKEY_NAME, '=', '\'', PKV.PART_KEY_VAL, '\''
ORDER BY PK.INTEGER_IDX), ') LOCATION \'', S.location, '\'\; ') AS add_partition_query
FROM
TBLS T
INNER JOIN DBS D ON T.DB_ID = D.DB_ID
INNER JOIN PARTITION_KEYS PK ON T.TBL_ID = PK.TBL_ID
INNER JOIN PARTITIONS P ON P.TBL_ID = T.TBL_ID
INNER JOIN PARTITION_KEY_VALS PKV ON P.PART_ID = PKV.PART_ID
AND PK.INTEGER_IDX = PKV.INTEGER_IDX
INNER JOIN SDS S ON P.SD_ID = S.SD_ID
GROUP BY P.PART_ID
ORDER BY D.name) alias1;" > $outputTargetPath/3_ADD_PARTITION.hql
ret=$?
if [ $ret -ne 0 ];
then
echo -e "Error - MySQL Error code is $ret while trying to extract ADD PARTITION scripts"
exit $ret
else
echo -e "\n\n"
echo -e "** ADD PARTITION extraction script $outputTargetPath/3_ADD_PARTITION.hql completed successfully"
fi
echo -e "\n\n"
echo -e "INFO: ** Here are your Target installation scripts"
ls -al $outputTargetPath/*
echo -e "\n\n"
echo -e "INFO: ** Run these hql scripts on your Target hive shell..."
exit 0
As root user, run the generate_target_ddl_from_source.sh script.
sudo ./generate_target_ddl_from_source.sh
The following scripts are generated under /tmp/hivemigrate/generatedSQL_target_Hive:
1_CREATE_DB.hql
2_CREATE_TABLE.hql
3_ADD_PARTITION.hql
During import, you copy these scripts from /tmp/hivemigrate/generatedSQL_target_Hive to a master node in the target BDS cluster with root and database access, as in the sketch below.
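The copy-and-run commands aren't spelled out in this document. A minimal sketch, assuming SSH access as the opc user and <target-master-node> as a placeholder for the target master node:
# Copy the generated HQL scripts to the target cluster.
scp -r /tmp/hivemigrate/generatedSQL_target_Hive opc@<target-master-node>:/tmp/
# On the target master node, run the scripts in order as the hive user.
sudo -u hive hive -f /tmp/generatedSQL_target_Hive/1_CREATE_DB.hql
sudo -u hive hive -f /tmp/generatedSQL_target_Hive/2_CREATE_TABLE.hql
sudo -u hive hive -f /tmp/generatedSQL_target_Hive/3_ADD_PARTITION.hql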
You can export service configurations from the source cluster and use them as a reference for any custom configuration changes that need to be applied to the destination cluster.
To export Zeppelin notebooks, service configurations, and versions, follow these steps:
Stop the Hive, Zeppelin, and Spark services.
Access the Ambari console using the Ambari IP address.
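The notebook export script provided by Oracle isn't included here. As a rough sketch, the Zeppelin notebook definitions can be pulled from the /user/zeppelin/notebook HDFS location mentioned earlier and packaged as a .tar.gz archive:
# Copy the notebook definitions out of HDFS and archive them.
sudo -u hdfs hadoop fs -get /user/zeppelin/notebook /tmp/zeppelin_notebook
tar -czf /tmp/zeppelin_notebook.tar.gz -C /tmp zeppelin_notebook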
You must review and update your code to use the latest Spark APIs. Spark and Hive use different catalogs in BDS, so the catalog must be updated for Spark to access Hive tables.
In Big Data Service, Hive creates ACID tables by default, and Spark does not work on ACID tables. You must create external tables so that the data is accessible from both Hive and Spark.
Compare the configuration file created with exportBDC.py on BDC with the Spark configuration in Big Data Service, and apply any custom configuration changes that were used on the source cluster.
Set up a fresh target environment on Big Data Service with the same BDC Hadoop version (Hadoop 2.7.x) as the source cluster.
Note
Note the following:
Define the Big Data Service cluster on OCI with the same size as the source BDC cluster. However, you must review your computing and storage needs before deciding the size of the target cluster.
For Oracle Cloud Infrastructure VM shapes, see Compute Shapes. Note that Big Data Service does not support all shapes.
If any software other than the BDC stack is installed on the source system using the bootstrap script or some other method, you must install and maintain that software on the target system as well.
Copy the PEM private key (oci_api_key.pem) file to all the nodes of the Big Data Service cluster, and set the appropriate permissions.
As an HDFS user, move the data from Object Store to the target HDFS.
Use hadoop fs -cp to import HDFS data from Object Store to a destination BDS cluster as an HDFS user.
sudo su - hdfs
hadoop fs -Dfs.oci.client.auth.fingerprint=<fingerprint> \
-Dfs.oci.client.auth.pemfilepath=<oci_pem_key> \
-Dfs.oci.client.auth.passphrase=<passphrase> \
-Dfs.oci.client.auth.tenantId=<OCID for Tenancy> \
-Dfs.oci.client.auth.userId=<OCID for User> \
-Dfs.oci.client.hostname=<HostName. Example: https://objectstorage.us-phoenix-1.oraclecloud.com/> \
-Dfs.oci.client.proxy.uri=http://<proxyhost>:<proxyport> \
-cp oci://<bucket>@<tenancy>/<dir>/ <hdfs-dir-to-copy>
Use hadoop distcp to import HDFS data from Object Store to a multi-node BDS cluster. Validate it against a specific directory first. For example, use /user/oozie for <hdfs-dir-to-copy> in the following command, and then run it for the entire HDFS (/).
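As with the export, the distcp command isn't reproduced in the source; a sketch mirroring the import options of the hadoop fs -cp command above (same placeholder assumptions):
sudo su - hdfs
hadoop distcp -Dfs.oci.client.auth.fingerprint=<fingerprint> \
-Dfs.oci.client.auth.pemfilepath=<oci_pem_key> \
-Dfs.oci.client.auth.passphrase=<passphrase> \
-Dfs.oci.client.auth.tenantId=<OCID for Tenancy> \
-Dfs.oci.client.auth.userId=<OCID for User> \
-Dfs.oci.client.hostname=<HostName> \
-Dfs.oci.client.proxy.uri=http://<proxyhost>:<proxyport> \
oci://<bucket>@<tenancy>/<dir>/ <hdfs-dir-to-copy>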
Re-configure hive.metastore.warehouse.dir to <exportedHdfsDir>/apps/hive/warehouse in hive-site.xml in the BDS cluster.
For example, if you copied your HDFS data to /tmp, configure the hive.metastore.warehouse.dir property as /tmp/apps/hive/warehouse.
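A quick check that the relocated warehouse directory contains the migrated table data; the path here assumes the /tmp example above:
sudo -u hdfs hadoop fs -ls /tmp/apps/hive/warehouse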
Validating the Migration
After migrating the resources, verify that the same set of Hive tables is present in the target cluster as in the source cluster.
Connect to the Hive shell.
hive
Run the following command to list the tables:
show tables;
Run a query against one of the tables. For example:
SELECT * FROM airports LIMIT 10;
Run the following command to verify the HDFS and Object Store data.
hadoop fs -du -s /tmp/hivemigrate
Check the cluster health by submitting all relevant jobs and confirming that they produce the expected results. Pick a job that you ran in BDC and run it on the BDS cluster (see the sketch after the note below).
Note
A successful job run depends not only on the location of the data but also on configuration settings such as HADOOP_CLASSPATH, the location of the client JARs, and so on.
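If you don't have a representative workload handy, a generic smoke test is to run the bundled MapReduce examples; this is a hypothetical check rather than part of the original procedure, and the JAR path varies by distribution, so it is located first:
# Locate the MapReduce examples JAR and run the pi estimator as a smoke test.
EXAMPLES_JAR=$(find /usr -name 'hadoop-mapreduce-examples*.jar' 2>/dev/null | head -1)
yarn jar "$EXAMPLES_JAR" pi 5 100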