Job runtime configuration
In order to see this object within the Managed Fusion UI, it must be associated with an app. To do this, create the object using the /apps endpoint.

<type>:<id>
path parameter (required)
The resource identifier takes the form <type>:<id>, such as datasource:movie-db, spark:dailyMetricsRollup-counters, or task:delete-old-system-logs. See Job types below.

enabled
body parameter
true or false to enable or disable the job.

triggers
body parameter
This parameter defines one or more conditions that trigger the job. See Job triggers below.
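For illustration, the sketch below shows the general shape of a job runtime configuration body as a Python dictionary. The enabled and triggers parameters are the ones described above; the variable name and the overall body shape are assumptions for the example, not a definitive schema.

```python
# Minimal sketch of a job runtime configuration body (assumed shape).
# "enabled" and "triggers" are the body parameters described above.
job_config = {
    "enabled": True,  # true or false to enable or disable the job
    "triggers": [
        # one or more trigger configurations; see Job triggers below
    ],
}
```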
Job types
The job type must be specified as part of the resource identifier in PUT, POST, and DELETE calls. These are the possible job types:
Job type | Description
--- | ---
datasource | A job to ingest data according to the specified datasource configuration, such as datasource:movie-db. Datasources are created using the Connector Datasources API or the Managed Fusion UI. See Datasource Jobs.
spark | A Spark job to process data, such as spark:dailyMetricsRollup-counters. Spark jobs are created using the Spark Jobs API or the Managed Fusion UI. See Spark Jobs.
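As a sketch of how the resource identifier appears in API calls, the example below issues a PUT and a DELETE using <type>:<id> identifiers. The host, app name, credentials, and the exact jobs path are assumptions here; consult the Jobs API reference for the endpoint used by your deployment.

```python
import requests

# Assumed app-scoped jobs path and credentials; adjust for your deployment.
BASE = "https://FUSION_HOST/api/apps/movie-search/jobs"
AUTH = ("admin", "password123")  # example credentials only

# Update the runtime configuration of a datasource job.
resp = requests.put(
    f"{BASE}/datasource:movie-db",
    json={"enabled": True, "triggers": []},
    auth=AUTH,
)
resp.raise_for_status()

# Delete the runtime configuration of a Spark job.
resp = requests.delete(f"{BASE}/spark:dailyMetricsRollup-counters", auth=AUTH)
resp.raise_for_status()
```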
Job triggers
Each job can have multiple triggers, and each trigger configuration requires enabled and type parameters. Additional parameters are required by each trigger type, as described below:
Run the job at a regular interval over some time range.

startTime
The date and time at which the job will first run, as an ISO-8601 date-time string.

endTime
The date and time at which the job will last run, as an ISO-8601 date-time string.

interval
The interval at which the job will run, as an integer whose unit is specified by repeatUnit below.

repeatUnit
One of: minute, hour, or day.
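A sketch of such a trigger, running every 6 hours within a date range. The type value "interval" is an assumption (the exact value is not given above); the other fields are the parameters just listed.

```python
# Sketch of an interval trigger (type value "interval" is assumed).
interval_trigger = {
    "enabled": True,
    "type": "interval",
    "startTime": "2024-01-01T00:00:00Z",  # first run, ISO-8601
    "endTime": "2024-12-31T00:00:00Z",    # last run, ISO-8601
    "interval": 6,                        # integer count of repeatUnit
    "repeatUnit": "hour",                 # one of minute, hour, or day
}
```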
Run the job on a cron-style schedule.

value
A cron string.
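A sketch of a cron-style trigger. The type value "cron" and the five-field cron syntax are assumptions for illustration; the value parameter is the one described above.

```python
# Sketch of a cron trigger (type value "cron" and cron syntax are assumed).
cron_trigger = {
    "enabled": True,
    "type": "cron",
    "value": "0 2 * * *",  # example cron string: run at 02:00 every day
}
```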
Run the job after another job succeeds or fails.

jobID
The ID of the job that triggers this one.

runOnSuccess
true to run when the specified job succeeds.

runOnFailure
true to run when the specified job fails.

runOnAbort
true to run when the specified job is aborted.
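A sketch of a trigger that chains this job after another job finishes. The type value "job_completion" is a placeholder assumption (the exact value is not given above); the remaining fields are the parameters just listed.

```python
# Sketch of a job-completion trigger (type value "job_completion" is assumed).
chained_trigger = {
    "enabled": True,
    "type": "job_completion",
    "jobID": "datasource:movie-db",  # the job whose completion triggers this one
    "runOnSuccess": True,            # run when that job succeeds
    "runOnFailure": False,           # do not run when it fails
    "runOnAbort": False,             # do not run when it is aborted
}
```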