Asked in Big Data | Hadoop
Q:
What is a Shuffle operation in Spark?

1 Answer

A shuffle is the operation Spark uses to re-distribute data across partitions.

It is a costly and complex operation, since it typically involves serializing records and moving them between executors over disk and network.

In general, a single Spark task operates on the elements of one partition. To execute a shuffle, Spark must read from all elements of all partitions, which is why it is also called an all-to-all operation. Transformations such as repartition, groupByKey, and reduceByKey trigger a shuffle.
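The all-to-all idea above can be sketched in plain Python (this is an illustrative simulation of hash partitioning, not Spark's actual API; the function name `shuffle` and the partition layout are assumptions for the example):

```python
def shuffle(partitions, num_out):
    """Redistribute (key, value) records across num_out partitions by key hash.

    Mimics the core idea of a Spark shuffle: every input partition scatters
    its records to every output partition (all-to-all), with each record
    routed by hash(key) % num_out, like Spark's HashPartitioner.
    """
    out = [[] for _ in range(num_out)]
    for part in partitions:          # read ALL input partitions...
        for key, value in part:      # ...and route EVERY record by its key
            out[hash(key) % num_out].append((key, value))
    return out

# Two input partitions, with key "a" split across both:
before = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
after = shuffle(before, 2)
# After the shuffle, all records with the same key land in the same
# output partition, which is what lets reduceByKey aggregate locally.
```

Note the cost visible even in this toy version: every input partition contributes to every output partition, so real shuffles fan out over the network between all executors.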
