Due to the pandemic, the last few months have seen a monumental rise in cloud adoption. The cloud has provided organisations with cost-effective solutions for working efficiently and remotely. As per market reports, the cloud computing market is expected to grow from $371.4 billion in 2020 to $832.1 billion by 2025, at a compound annual growth rate (CAGR) of 17.5 percent.
A large amount of data is generated every day from different sources across industries and geographies. Big Data is the fuel driving advancements and innovations among organisations around the globe. For instance, tech giants like Google and Amazon harness Big Data to gain a competitive advantage.
Over the years, Apache Hadoop has become one of the most important tools for working with Big Data. It is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
In this article, we will discuss how to move your data from on-premise Apache Hadoop to Google Cloud.
Why Move To Cloud?
Conventionally, enterprises adopting the Apache Hadoop framework must first decide on a deployment model. In the on-premise, full-custom model, businesses purchase commodity hardware, then install and operate it themselves.
However, the on-premise model comes with its own set of challenges:
- Resources cannot be scaled independently
- Difficult to scale and upgrade clusters
- Large upfront machine costs
Moving to Google Cloud can thus help developers save effort, cost and time.
Robert Saxby, Product Manager at Google Cloud, said, “As these on-prem deployments of Hadoop and Apache Spark, Presto, and more moved out of experiments and into thousand-node clusters, cost, performance, and governance challenges emerged.” He added, “While these challenges grew on-prem, Google Cloud emerged as a solution for many Hadoop admins looking to decouple compute from storage to increase performance while only paying for the resources they use.”
Steps To Migrate
Google Cloud includes Dataproc, a managed Hadoop and Spark environment. If you don’t want to move away from all of the Hadoop tools, you can use Dataproc to run most of your existing jobs with minimal alteration.
The above illustration shows a hypothetical migration from an on-premises system to an ephemeral model on Google Cloud. Below are some of the recommended steps for migrating your workflows from on-premise Hadoop to Google Cloud:
1| Move Your Data First
First, move your data into Cloud Storage buckets. Start with backup or archived data to minimise the impact on the existing Hadoop system.
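As a sketch, the initial copy can be done with Hadoop's DistCp and the Cloud Storage connector. The bucket name, region and source path below are placeholders, and the commands assume the Cloud Storage connector is installed on the on-premise cluster:

```shell
# Create a destination bucket (name and region are illustrative)
gsutil mb -l us-central1 gs://my-migration-bucket

# Copy an archived HDFS directory to Cloud Storage first,
# so the production Hadoop workload is not disturbed
hadoop distcp hdfs:///archive/2020 gs://my-migration-bucket/archive/2020
```

DistCp runs the copy as a distributed MapReduce job, so large archives can be moved in parallel without routing everything through a single machine.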
2| Build A Proof Of Concept
The next step is to test and experiment with a subset of your data. It is crucial to build a small-scale proof of concept for each job; you can also try new approaches to working with your data. This will help you adjust to Google Cloud and other cloud-computing paradigms.
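One way to keep the proof of concept cheap is a single-node Dataproc cluster running an existing job against a data sample. The cluster name, job class, jar path and bucket below are hypothetical:

```shell
# Minimal single-node cluster for experimentation
gcloud dataproc clusters create poc-cluster \
    --region=us-central1 \
    --single-node

# Run an existing Spark job against a small subset of the migrated data
gcloud dataproc jobs submit spark \
    --cluster=poc-cluster \
    --region=us-central1 \
    --class=com.example.WordCount \
    --jars=gs://my-migration-bucket/jars/wordcount.jar \
    -- gs://my-migration-bucket/archive/2020/sample/

# Tear the cluster down once the experiment is done
gcloud dataproc clusters delete poc-cluster \
    --region=us-central1 --quiet
```

Because the job reads from Cloud Storage rather than HDFS, the same jar can usually be reused with only the input paths changed.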
3| Think In Terms Of Specialised, Ephemeral Clusters
The third step is to use the smallest clusters possible, scoping them to single jobs or small groups of closely related jobs. The biggest difference between running an on-premises Hadoop workflow and running the same workflow on Google Cloud is the shift away from monolithic, persistent clusters towards specialised, ephemeral ones: you spin up a cluster when you need to run a job and delete it once the job is completed. This approach lets you tailor cluster configurations to individual jobs.
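The ephemeral pattern can be sketched as a create/submit/delete sequence, with the cluster sized for one specific job. The machine types, worker count and job details here are illustrative assumptions, not prescriptions:

```shell
# Spin up a cluster tailored to one memory-heavy Spark job
gcloud dataproc clusters create report-job-cluster \
    --region=us-central1 \
    --num-workers=4 \
    --worker-machine-type=n1-highmem-8

# Run the job the cluster was created for
gcloud dataproc jobs submit spark \
    --cluster=report-job-cluster \
    --region=us-central1 \
    --class=com.example.MonthlyReport \
    --jars=gs://my-migration-bucket/jars/report.jar

# Delete the cluster as soon as the job completes
gcloud dataproc clusters delete report-job-cluster \
    --region=us-central1 --quiet
```

Since the data lives in Cloud Storage rather than on the cluster, deleting the cluster loses nothing, and the next job can use a completely different machine shape.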
4| Use The Google Cloud Tools
The last step is to adopt the Google Cloud tools that fit your workloads.
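One such tool is Dataproc workflow templates, which automate the ephemeral pattern described above: a managed cluster is created, the jobs run, and the cluster is deleted, all from a single template. The template name, cluster shape and job details below are placeholders:

```shell
# Define a workflow template
gcloud dataproc workflow-templates create etl-daily --region=us-central1

# Attach a managed (auto-created, auto-deleted) cluster to it
gcloud dataproc workflow-templates set-managed-cluster etl-daily \
    --region=us-central1 \
    --cluster-name=etl-ephemeral \
    --num-workers=2 \
    --worker-machine-type=n1-standard-8

# Add the Spark job the workflow should run
gcloud dataproc workflow-templates add-job spark \
    --workflow-template=etl-daily \
    --region=us-central1 \
    --step-id=etl-step \
    --class=com.example.DailyEtl \
    --jars=gs://my-migration-bucket/jars/etl.jar

# Instantiate: create cluster, run jobs, delete cluster
gcloud dataproc workflow-templates instantiate etl-daily --region=us-central1
```

Each instantiation is a fresh cluster, so the template can be triggered on a schedule without any long-lived infrastructure to maintain.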
Migrating to Google Cloud from Hadoop on-premise offers a number of benefits, such as built-in support for Hadoop, managed hardware and configuration, simplified version management and flexible job configuration.