Apache Hadoop is built around the MapReduce programming model, in which Map and Reduce operations are used to process very large data sets.
In this model, the Map step filters and sorts the input data, and the Reduce step summarizes (aggregates) the results. The idea is borrowed from functional programming, where map and reduce are standard higher-order functions.
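For concreteness, the following is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API (the org.apache.hadoop.mapreduce package). The class names WordCountMapper and WordCountReducer are illustrative, not part of Hadoop itself; in a real project each class would live in its own file.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map step: split each input line into words and emit (word, 1) pairs.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce step: the framework groups the pairs by word, so each call
// receives one word and all of its counts; we sum them to get the total.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```

Between the two steps the framework shuffles and sorts the intermediate pairs, so every Reduce call sees all values for a given key.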
The key properties of this model are scalability and fault tolerance. Apache Hadoop achieves them by distributing both data and computation across a cluster of commodity machines: HDFS replicates data blocks on multiple nodes, and the MapReduce framework runs many map and reduce tasks in parallel, re-executing any task that fails on another node.
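To submit such a job to the cluster, a small driver configures it and hands it to the framework, which then schedules the tasks and handles retries. The sketch below assumes the WordCountMapper and WordCountReducer classes from the previous example; input and output paths are HDFS paths passed on the command line.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // args[0] and args[1] are the HDFS input and output directories.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```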