Copy File from Cloud HDFS to Local Computer

When I work with big data technologies like Spark on a large dataset, I like to work on the university cloud, where everything is faster. However, for various reasons I sometimes have to move to my local computer (my laptop). This time the reason is that I need a Python package related to matplotlib, named baseplot, which is not installed on the cloud. However, the data I need to work on lives on the cloud's HDFS, so I need to copy it from HDFS to my laptop. This can be done in two simple steps:

Step 1: copy the data from HDFS to the remote machine's local file system (not HDFS)
Step 2: copy the data from the remote machine's local file system to my laptop
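
The two steps above can be sketched as shell commands. The host name and all paths below are placeholders for illustration, not my actual cluster details:

```shell
# Step 1 (run on the cloud gateway node): pull the file out of HDFS
# onto the remote machine's ordinary file system.
hdfs dfs -get /user/me/data.csv ~/data.csv

# Step 2 (run on my laptop): copy the file down over SSH.
scp me@cloud.example.edu:data.csv ~/Downloads/data.csv
```

`hdfs dfs -get` is the standard way to export a file from HDFS; `scp` then moves it between machines over SSH.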
Continue reading “Copy File from Cloud HDFS to Local Computer”

*.csv File Preprocessing Using Pandas

For any machine learning or data mining purpose, the first job is to pre-process the data so that we can use it for the original purpose. In lots of cases we have the raw data in *.csv format, which we need to import and preprocess using the language we are using for the particular job. Python is one of the most popular languages for this purpose. In this article I will use Python and one very popular library, pandas, to show how we can read, import and preprocess a *.csv file.

We have a *.csv file which we want to pre-process. It has a large number of columns, so it is not a good idea to display the whole thing here; I am showing only a part of it.
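
As a minimal sketch of the reading and cleaning step, here is the pattern with pandas. The column names and values below are made up for illustration; a real file would be passed as a path to `pd.read_csv`:

```python
import io

import pandas as pd

# A tiny stand-in for the real *.csv file (made-up columns, with gaps).
raw = io.StringIO(
    "id,age,income\n"
    "1,34,52000\n"
    "2,,61000\n"
    "3,29,\n"
)

# Read the CSV into a DataFrame; in practice: pd.read_csv("data.csv")
df = pd.read_csv(raw)

# Typical preprocessing: inspect, then fill missing values with column means.
print(df.shape)  # (3, 3)
df["age"] = df["age"].fillna(df["age"].mean())
df["income"] = df["income"].fillna(df["income"].mean())
print(int(df.isna().sum().sum()))  # 0 missing values left
```

Filling with the column mean is just one common choice; dropping rows with `df.dropna()` is the other usual option.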

Continue reading “*.csv File Preprocessing Using Pandas”

Understanding MapReduce in My Way : Starting with Word Count

The Word Count problem is known as the ‘Hello World’ of MapReduce. In this article I will explain how I understand the different bits of MapReduce, in my own way. The code provided in this article is trivial and is available in lots of places, including the official MapReduce documentation. My focus is on how it really works.
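
The canonical examples use Hadoop's Java API, but the map–shuffle–reduce flow behind Word Count can be simulated in a few lines of plain Python (the input lines below are made up for illustration):

```python
from itertools import groupby
from operator import itemgetter

# Stand-in input: in real MapReduce, each mapper receives a split of a file.
lines = ["hello world", "hello mapreduce", "world of mapreduce"]

# Map phase: emit a (word, 1) pair for every word in every line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle/sort phase: the framework groups pairs by key between the
# mappers and reducers; here a simple sort stands in for it.
mapped.sort(key=itemgetter(0))

# Reduce phase: sum the counts for each distinct word.
counts = {
    word: sum(count for _, count in pairs)
    for word, pairs in groupby(mapped, key=itemgetter(0))
}

print(counts)  # {'hello': 2, 'mapreduce': 2, 'of': 1, 'world': 2}
```

The point of the simulation is that each phase only ever sees key–value pairs, which is exactly what makes the real framework easy to distribute.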
Continue reading “Understanding MapReduce in My Way : Starting with Word Count”