Running WordCount on a Hadoop single-node cluster


Start the Hadoop daemons, then confirm they are all running with jps:

$ start-all.sh

$ jps

Create a small test file on the local filesystem to use as input (sudo is unnecessary here; a root-owned directory would cause a permission error on the testing.txt redirect below):

$ cd Desktop

$ mkdir data

$ cd data

$ jps >> testing.txt
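The jps redirect above just captures some throwaway text; any plain-text file works as WordCount input. A minimal sketch (the filename and contents are illustrative, not from the tutorial):

```shell
# Create a small sample input file and count its words locally
printf 'hadoop counts words\nhadoop counts lines too\n' > testing.txt
wc -w testing.txt
```

Checking the word count locally gives you a rough number to compare against the sum of the counts WordCount produces.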

Switch to the Hadoop installation directory, create your HDFS home directory, and copy the data folder into HDFS (the relative path input resolves to /user/chaalpritam/input):

$ cd /usr/local/hadoop

$ bin/hdfs dfs -mkdir /user

$ bin/hdfs dfs -mkdir /user/chaalpritam

$ bin/hdfs dfs -put /home/chaalpritam/Desktop/data input

Run the bundled WordCount example. Note that the output directory must not already exist, or the job will fail immediately:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output

Print the result. A successful run leaves a _SUCCESS marker file alongside the part files in the output directory (visible with bin/hdfs dfs -ls output), while a failed run reports "Job failed" on the console:

$ bin/hdfs dfs -cat output/*
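WordCount emits one word-and-count line per distinct word. The same counting can be sketched locally with standard tools, which is handy for sanity-checking the Hadoop output (a rough local equivalent, not the MapReduce job itself):

```shell
# Split input on whitespace, then count occurrences of each word,
# most frequent first
printf 'hello hadoop\nhello world\n' \
  | tr -s '[:space:]' '\n' \
  | sort | uniq -c | sort -rn
```

Here "hello" appears with count 2 and the other words with count 1, mirroring what WordCount would report for the same input.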



2 comments:

  1. How do i know if the mapreduce completed successfully ? I got this:
    15/02/20 19:08:47 INFO mapreduce.Job: map 100% reduce 100%
    15/02/20 19:08:49 INFO mapreduce.Job: Job job_1424475703024_0004 failed with state FAILED due to: Task failed task_1424475703024_0004_m_000000
    Job failed as tasks failed. failedMaps:1 failedReduces:0

  2. Nevermind i went back and followed your tutorial exactly and... THANK YOU SO MUCH IT WORKS !!!! OMG installing hadoop is ridiculous but it now works thank goddd . Thank you again


 
