0 votes
in Spark Preliminaries by
What is Spark Streaming's caching / persistence?

1 Answer

0 votes
by

DStreams let developers persist (cache) a stream's data in memory. This is useful when the same DStream data will be computed on more than once: calling a DStream's persist() method keeps each underlying RDD in memory so it is not recomputed for every use. For input streams that receive data over the network (e.g. Kafka, Flume, sockets), the default persistence level replicates the data to two nodes, so if one node fails the other still holds a copy for fault tolerance.
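A minimal Scala sketch of the idea (the host/port and the word-count logic are just illustrative assumptions, not part of the question): the socket stream is received with the replicated default storage level, and persist() caches the derived DStream because it is reused by two different computations.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamCachingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DStreamCachingSketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Receiver-based network input stream. MEMORY_AND_DISK_SER_2 is the default
    // for such streams: the received data is replicated to two nodes for fault tolerance.
    val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER_2)

    val words = lines.flatMap(_.split(" "))

    // Cache the DStream because it is reused by two separate computations below.
    // With no arguments, persist() keeps DStream data serialized in memory.
    words.persist()

    // First use of the cached DStream: word counts per batch
    words.map(w => (w, 1)).reduceByKey(_ + _).print()

    // Second use of the same cached DStream: total word count per batch
    words.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```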
