Install the Spark History Server

While logs from the Spark driver and executor pods can be viewed using kubectl logs [POD_NAME], executor pods are deleted at the end of their execution, and driver pods are deleted by Fusion on a default schedule of once per hour. To store and view Spark logs long term, install the Spark History Server into your Kubernetes cluster and configure Spark to write its event logs to persistent storage.
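For example, persistent logging can be enabled through Spark's standard spark.eventLog.* properties. A minimal sketch of a spark-submit invocation follows; the bucket path, class, and jar names are placeholders, and the right way to pass these properties depends on how your jobs are submitted:

# Hypothetical spark-submit invocation; bucket, class, and jar are placeholders.
spark-submit \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=s3a://my-log-bucket/spark-logs \
  --class com.example.MyJob \
  my-job.jar

The History Server must then be configured to read from the same location, typically via the spark.history.fs.logDirectory property.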

The Spark History Server can be installed from the stable/spark-history-server Helm chart:

helm install stable/spark-history-server --values values.yaml
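Note that the command above uses Helm 2 syntax. Helm 3 requires an explicit release name, and the stable chart repository must be added first. A sketch, assuming the release name spark-history-server (arbitrary) and the chart's default values as a starting point:

# Add the (now archived) stable chart repository, if not already present.
helm repo add stable https://charts.helm.sh/stable
# Inspect the chart's configurable values before editing values.yaml.
helm show values stable/spark-history-server > values.yaml
# Helm 3 syntax: an explicit release name is required.
helm install spark-history-server stable/spark-history-server --values values.yaml

In values.yaml, point the History Server at the same log directory your Spark jobs write to; the exact key names vary by chart version, so check the output of helm show values.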

For related topics, see Spark Operations.