This is quite a common question in Hadoop interviews; let us understand why MapReduce is slower in comparison to other processing frameworks:
MapReduce is slower because:
It is batch-oriented. Every job, no matter how simple, must be expressed as a mapper function and a reducer function, and results become available only after the entire batch has been processed.
During processing, the output of each mapper is spilled to the underlying disks, then shuffled and sorted before being picked up by the reduce phase, and the final reducer output is written to HDFS. When a workflow chains multiple jobs together, each job must also read its input from and write its results back to HDFS. All of this repeated disk and HDFS I/O between phases is what makes MapReduce slow.
In addition to the above reasons, MapReduce jobs are typically written in Java, which is verbose; even a simple job requires many lines of boilerplate code.
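The map → shuffle/sort → reduce pipeline described above can be sketched in plain Java. This is a simplified in-memory illustration for a word count, not real Hadoop code: in an actual MapReduce job, the boundaries between these three methods are exactly where Hadoop spills data to disk, which is the overhead discussed above.

```java
import java.util.*;
import java.util.stream.*;

public class MiniMapReduce {
    // Map phase: emit a (word, 1) pair for every word in the input line.
    // In real MapReduce, this output is spilled to local disk, not kept in memory.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle/sort phase: group the values by key in sorted order.
    // Hadoop implements this by writing sorted partitions to disk and merging them.
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    // Reduce phase: sum the counts for each word.
    // In real MapReduce, the reducer's output is written to HDFS.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> out = new TreeMap<>();
        grouped.forEach((k, v) -> out.put(k, v.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                reduce(shuffle(map("the quick fox jumps over the lazy dog the end")));
        System.out.println(counts);
    }
}
```

Note that even this toy version needs three separate methods and explicit key-value plumbing, which hints at why frameworks with higher-level APIs feel far more concise for the same task.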