
How many reducers run in a MapReduce job?

1 Answer


In Hadoop MapReduce, the Mapper processes each input record (supplied by the RecordReader) and generates key-value pairs. The Reducer takes the set of intermediate key-value pairs produced by the Mappers as input and runs a reduce function on each key to generate the output. The Reducer output is the final output, which is stored in HDFS. A Reducer typically performs aggregation or summation-style computation.
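
For illustration, a minimal summing Reducer could look like the sketch below; the class name SumReducer and the Text/IntWritable key and value types are assumptions made for this example, not something fixed by the question:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Minimal summing Reducer: for each intermediate key, add up all values
// emitted by the Mappers and write one (key, total) pair as the final output.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();         // aggregate all values for this key
        }
        result.set(sum);
        context.write(key, result);     // final output record, stored in HDFS
    }
}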

With the help of Job.setNumReduceTasks(int), the user sets the number of reducers for the job. The right number of reducers is given by:

0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>)

With 0.95, all of the reducers can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes finish their first round of reduces and launch a second wave of reduces, which does a much better job of load balancing.
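
As a hedged driver sketch, assuming a hypothetical cluster of 10 worker nodes with 8 containers each (these figures are illustrative, not from the question), the formula can feed directly into Job.setNumReduceTasks(int):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountExample {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reducer count example");

        // Assumed cluster size: 10 worker nodes, 8 reduce containers per node.
        int nodes = 10;
        int containersPerNode = 8;

        // 0.95 -> one wave:  0.95 * 10 * 8 = 76 reducers launch together.
        // 1.75 -> two waves: 1.75 * 10 * 8 = 140 reducers, better load balancing.
        int numReducers = (int) (0.95 * nodes * containersPerNode);

        job.setNumReduceTasks(numReducers);  // tell the framework how many reducers to run

        // ... set Mapper/Reducer classes, input/output paths, then submit the job.
    }
}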

When the number of reducers in a job is increased:

Framework overhead increases.

Load balancing improves.

The cost of failures decreases.

...