Apache Spark is compatible with Hadoop, and combining the two technologies makes for a powerful stack.
Hadoop's components can be used alongside Spark in several ways.
MapReduce: Spark can run alongside MapReduce as a processing framework in the same Hadoop cluster.
Real-Time & Batch Processing: Spark and MapReduce can be used together, with Spark handling real-time (streaming) workloads and MapReduce handling batch processing.
HDFS: Spark can run on top of HDFS and take advantage of its replicated, distributed storage.
YARN: YARN, the next-generation resource manager introduced in Hadoop 2, can schedule and run Spark applications on the cluster.
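The HDFS and YARN points above can be illustrated in a single job submission: a Spark application can read its input from HDFS while YARN schedules its executors. The sketch below is a hypothetical example, not taken from this text; the application file, input path, and resource settings are placeholder assumptions to be adapted to a real cluster.

```shell
# Hypothetical example: submit a Spark application to a Hadoop cluster.
# --master yarn          -> let YARN allocate and schedule the Spark executors
# --deploy-mode cluster  -> run the driver inside the cluster, not on the client
# hdfs:// input path     -> read data from HDFS's replicated storage
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 2g \
  my_app.py hdfs:///data/input/events.log
```

The same application could instead be launched with `--master local[*]` for testing on a single machine; only the master and the storage URI change, not the application code.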