A computer is a powerful machine when it comes to processing large amounts of data quickly and efficiently. But data grows without limit, while the power of a single computer is bounded. In the machine learning context, a machine can efficiently handle only as much data as its RAM can hold, and there is a limit to how far a single machine can be upgraded.

Multiple machines working together are a whole different story. Cluster computing combines the computing power of multiple machines, pooling their resources to handle tasks that are too big for any single machine.

Apache Spark is a framework built around this idea of cluster computing. It provides data parallelism with strong fault tolerance to prevent data loss. It has high-level APIs for programming languages like Python, R, Java and Scala, and it supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

In this article, we will learn to set up an Apache Spark environment on Amazon Web Services (AWS).

Setting Up Spark in AWS

The first thing we need is an AWS EC2 instance. We have already covered this part in detail in another article. Follow the link below to set up a full-fledged data science machine with AWS.

Setting Up A Completely Free Jupyter Server For Data Science With AWS

Make sure to perform all the steps in that article, including setting up the Jupyter Notebook, as we will need it to use Spark. Once you are done with that article, follow along here.

Installing Dependencies

To install Spark, we have two dependencies to take care of: one is Java and the other is Scala. Let's install both on our AWS instance.

Connect to the instance over SSH and follow the steps below to install Java and Scala.

To connect to the EC2 instance, type in and enter:

ssh -i "security_key.pem" ubuntu@ec2-public_ip.us-east-3.compute.amazonaws.com

Make sure to substitute your own security key and public IP.

On the EC2 instance, update the packages by executing the following command in the terminal:

sudo apt-get update

Install Java with the following command:

sudo apt install default-jre

Verify the installation by typing java --version, which prints the installed Java version.

Install Scala by typing and entering the following command:

sudo apt install scala

Verify by typing scala -version.

We also need to install the py4j library, which enables Python programs running in a Python interpreter to dynamically access Java objects in a Java Virtual Machine. PySpark uses py4j under the hood to talk to the Spark JVM.

To install py4j, make sure you are in the Anaconda environment. You will see '(base)' before your instance name if you are in the Anaconda environment; if not, type and enter conda activate. To exit from the Anaconda environment, type conda deactivate.

Once you are in conda, type pip install py4j to install py4j.
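To see what py4j actually does for us, here is a minimal sketch of calling a Java object from Python. It is only an illustration, not part of the setup: launch_gateway starts a throwaway JVM running a Py4J gateway server, and the java.util.Random instance lives on the Java side.

from py4j.java_gateway import JavaGateway, GatewayParameters, launch_gateway

# Start a JVM running a Py4J GatewayServer; returns the port it listens on
port = launch_gateway(die_on_exit=True)
gateway = JavaGateway(gateway_parameters=GatewayParameters(port=port))

# Instantiate java.util.Random inside the JVM and call it from Python
random = gateway.jvm.java.util.Random()
print(random.nextInt(100))

gateway.shutdown()

This is exactly the kind of Python-to-JVM bridge PySpark relies on when it drives Spark from Python.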
Installing Spark

Head to the downloads page of Apache Spark at https://spark.apache.org/downloads.html, choose a specific version and hit download, which will take you to a page with the mirror links. Copy one of the mirror links and use it in the following command to download the Spark .tgz file onto your EC2 instance.

wget http://mirrors.estointernet.in/apache/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz

Extract the downloaded tgz file and move the decompressed folder to the home directory with the following commands.

sudo tar -zxvf spark-2.4.3-bin-hadoop2.7.tgz
mv spark-2.4.3-bin-hadoop2.7 /home/ubuntu/

Set the SPARK_HOME environment variable to the Spark installation directory and update the PATH and PYTHONPATH environment variables by executing the following commands:

export SPARK_HOME=/home/ubuntu/spark-2.4.3-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH

Note that these exports last only for the current shell session; append them to ~/.bashrc to make them persistent.

The Spark environment is now ready, and you can use Spark in a Jupyter notebook.

Make sure the PATH variable is set correctly according to where you installed your applications. If your overall PATH and PYTHONPATH look like what is shown below, we are good to go.

PATH:

/home/ubuntu/spark-2.4.3-bin-hadoop2.7/bin:/home/ubuntu/anaconda3/condabin:/bin:/usr/bin:/home/ubuntu/anaconda3/bin/

PYTHONPATH:

/home/ubuntu/spark-2.4.3-bin-hadoop2.7/python:

Type and enter pyspark on the terminal to open the PySpark interactive shell.

Head to your workspace directory and spin up the Jupyter notebook by executing the following command:

jupyter notebook

Open Jupyter in a browser using the public DNS of the EC2 instance:

https://ec2-19-265-132-102.us-east-2.compute.amazonaws.com:8888

Import the PySpark module to verify that everything is working properly.
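As a quick sanity check, a minimal sketch along these lines should run in a notebook cell; the application name "SparkTest" and the sample rows are arbitrary values chosen just for this example.

from pyspark.sql import SparkSession

# Start (or reuse) a SparkSession
spark = SparkSession.builder.appName("SparkTest").getOrCreate()

# Build a tiny DataFrame and print it to confirm Spark is wired up
df = spark.createDataFrame([(1, "spark"), (2, "aws")], ["id", "tool"])
df.show()

spark.stop()

If the DataFrame prints without errors, Spark, Java, Scala and py4j are all talking to each other correctly.

Happy coding!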