Mar 14 in Spark Sql
Q: What are the different storage/persistence levels in Apache Spark?

1 Answer

Mar 14
Apache Spark can persist intermediate data across shuffle operations. It is recommended to call the persist() method on an RDD only when the RDD will be reused; otherwise the caching overhead brings no benefit. Spark offers several persistence levels for storing RDDs in memory, on disk, or in a combination of both.

The persistence (storage) levels in Apache Spark are:

MEMORY_ONLY - the default level; stores the RDD as deserialized Java objects in the JVM heap. Partitions that do not fit in memory are recomputed on the fly when needed.

MEMORY_AND_DISK - stores the RDD in memory and spills partitions that do not fit to disk, reading them from there when required.

MEMORY_ONLY_SER - stores the RDD as serialized Java objects (one byte array per partition); more space-efficient than MEMORY_ONLY, but more CPU-intensive to read.

MEMORY_AND_DISK_SER - like MEMORY_ONLY_SER, but partitions that do not fit in memory are spilled to disk instead of being recomputed.

DISK_ONLY - stores the RDD partitions only on disk.

OFF_HEAP - stores the RDD in serialized form in off-heap memory (requires off-heap memory to be enabled in the Spark configuration).

Most levels also have a replicated variant (e.g. MEMORY_ONLY_2) that stores each partition on two cluster nodes.
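A minimal sketch of how a persistence level is applied in practice (this assumes an already-created SparkContext named `sc` and an illustrative input path, not taken from the answer above):

```scala
import org.apache.spark.storage.StorageLevel

// Build an RDD; the path here is a placeholder for illustration only.
val lines = sc.textFile("data/input.txt")
val words = lines.flatMap(_.split(" "))

// Keep the RDD serialized in memory, spilling partitions to disk if they don't fit.
words.persist(StorageLevel.MEMORY_AND_DISK_SER)

words.count()  // the first action materializes and caches the RDD
words.count()  // later actions reuse the cached partitions instead of recomputing

words.unpersist()  // release the storage once the RDD is no longer needed
```

Note that cache() is shorthand for persist(StorageLevel.MEMORY_ONLY), and a persistence level cannot be changed once set without calling unpersist() first.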
