How does Spark Structured Streaming manage fault tolerance?

Like a Spark batch job, Spark Structured Streaming provides fault tolerance. For example, when an executor dies, Spark launches a replacement executor and reschedules the failed tasks on it.

However, streaming introduces harder failure cases: the input is not a single file that can simply be re-read if a partition is lost. Spark Structured Streaming therefore requires the source to be replayable, meaning it must allow data to be tracked and re-requested via offsets (e.g., Kafka). It also provides fault tolerance at the driver level: the query checkpoints its progress to an HDFS directory, so after a driver failure the stream can restart from where it failed by replaying the source. (Note: the socket source is not replayable.)
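
A minimal PySpark sketch of how these pieces fit together, assuming a hypothetical Kafka broker at `broker:9092`, a topic named `events`, and HDFS paths for the output and checkpoint directories (all of these names are illustrative, not from the original answer):

```python
from pyspark.sql import SparkSession

# Requires the spark-sql-kafka connector on the classpath.
spark = SparkSession.builder.appName("checkpointed-stream").getOrCreate()

# Replayable source: Kafka tracks progress via per-partition offsets,
# so Spark can re-request any range of data after a failure.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Checkpointing records which offsets have been committed; on restart
# with the same checkpointLocation, the query resumes from that point.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/events")                        # hypothetical output path
    .option("checkpointLocation", "hdfs:///checkpoints/events")   # hypothetical checkpoint dir
    .start()
)

query.awaitTermination()
```

On restart with the same `checkpointLocation`, Spark reads the last committed offsets from the checkpoint and asks Kafka to replay from there, which is exactly what a non-replayable source such as a socket cannot support.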