Custom Spark Jobs
Run a custom Spark job.
Use this job type when you want to run your own custom Spark application.
The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, 0-9, dash (-) and underscore (_)
<= 128 characters
Match pattern: ^[A-Za-z0-9_\-]+$
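For example, an ID such as daily-clickstream_rollup-v2 satisfies both the length limit and the match pattern; the value itself is purely illustrative.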
Name of the resource uploaded to the Blob store. This should match the Blob name.
>= 1 character
Application's main class (for Java/Scala)
Blob resource (files) to be placed in the working directory of each executor
Blob resource (.zip, .egg, .py files) to place on the PYTHONPATH for Python apps
Spark configuration settings.
Default:
  [
    {"key": "spark.executor.memory", "value": "2g"},
    {"key": "spark.driver.memory", "value": "2g"},
    {"key": "spark.logConf", "value": "true"},
    {"key": "spark.eventLog.enabled", "value": "true"},
    {"key": "spark.eventLog.compress", "value": "true"},
    {"key": "spark.scheduler.mode", "value": "FAIR"}
  ]
Object attributes:
  key (required): Parameter Name (string)
  value: Parameter Value (string)
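As a sketch, a custom configuration list follows the same key/value shape as the default above. Each entry behaves like a --conf key=value setting handed to spark-submit; the specific properties and values below are illustrative only:

  [
    {"key": "spark.executor.memory", "value": "4g"},
    {"key": "spark.executor.cores", "value": "2"},
    {"key": "spark.sql.shuffle.partitions", "value": "64"}
  ]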
Additional options to pass to spark-submit when running this job.
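Assuming these options are written as they would appear on the spark-submit command line, a plausible value is shown below; the flags are standard spark-submit arguments, but how they are supplied (single string or list) is not specified in this reference:

  --num-executors 4 --executor-cores 2 --name nightly-rollup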
Java options to pass to Spark driver/executor
Object attributes:
  key (required): Parameter Name (string)
  value: Parameter Value (string)
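As an illustrative sketch, assuming each entry is one JVM option expressed as a name/value pair (the exact mapping onto spark.driver.extraJavaOptions and spark.executor.extraJavaOptions is an assumption), entries might look like:

  {"key": "-Dlog4j.configuration", "value": "file:log4j.properties"}
  {"key": "-Djava.security.egd", "value": "file:/dev/./urandom"}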
Enables verbose output for spark-submit
Default: true
Remove all temp files on exit
Default: true
Set environment variables for driver
Object attributes:
  key (required): Parameter Name (string)
  value: Parameter Value (string)
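For example, a driver environment variable entry follows the same key/value shape. PYSPARK_PYTHON is a variable Spark itself honors when launching Python; the path is illustrative:

  {"key": "PYSPARK_PYTHON", "value": "/usr/bin/python3"}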
Default: custom_spark_job
Allowed values: custom_spark_job
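Putting the fields together, a minimal job definition might look like the sketch below. The property names (id, type, mainResource, mainClass, sparkConf, envVars, verbose, cleanupOnExit) are assumptions chosen for illustration; this page does not list the exact field names, so check the API schema for the real ones:

  {
    "id": "daily-clickstream_rollup-v2",
    "type": "custom_spark_job",
    "mainResource": "clickstream-rollup-assembly.jar",
    "mainClass": "com.example.jobs.ClickstreamRollup",
    "sparkConf": [
      {"key": "spark.executor.memory", "value": "2g"},
      {"key": "spark.driver.memory", "value": "2g"}
    ],
    "envVars": [
      {"key": "ENVIRONMENT", "value": "production"}
    ],
    "verbose": true,
    "cleanupOnExit": true
  }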