GET /spark/jobs/{id}
Get a job's last run
import requests

# Replace {FUSION HOST} with your Fusion host and {id} with the job configuration name.
url = "https://{FUSION HOST}/api/spark/jobs/{id}"

headers = {"Authorization": "Basic <encoded-value>"}

response = requests.get(url, headers=headers)

print(response.text)
{
  "state": "running",
  "jobId": "xamafwrdxnkb",
  "jobConfig": {
    "type": "parallel-bulk-loader",
    "id": "index_synthetic_data",
    "format": "json",
    "path": "gs://lucidworks-example-data/hardware/1000/1000_synthetic.json",
    "outputCollection": "Synthetic_dataset_test",
    "outputIndexPipeline": "Synthetic_dataset_test",
    "clearDatasource": false,
    "defineFieldsUsingInputSchema": true,
    "atomicUpdates": false,
    "cacheAfterRead": false,
    "continueAfterFailure": false,
    "updates": [
      {
        "userId": "docs",
        "timestamp": "2025-08-14T20:50:09.847634657Z"
      }
    ]
  },
  "hostname": "10.64.19.46",
  "result": {
    "jobConfigId": "index_synthetic_data",
    "jobRunId": "xamafwrdxnkb",
    "podId": "driver-index-synthetic-data-xama-xamafwrdxnkb"
  }
}

Authorizations

Authorization
string · header · required

Basic authentication header of the form Basic <encoded-value>, where <encoded-value> is the base64-encoded string username:password.
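For example, the <encoded-value> can be produced with Python's standard base64 module. The credentials below are placeholders; substitute your own Fusion username and password:

```python
import base64

# Placeholder credentials for illustration only.
credentials = "username:password"
encoded = base64.b64encode(credentials.encode("utf-8")).decode("ascii")

headers = {"Authorization": f"Basic {encoded}"}
print(headers["Authorization"])  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```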

Path Parameters

id
string · required

The name of the job configuration for which to find job runs.
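To build the request URL, substitute the job configuration name for the {id} path segment. A minimal sketch, assuming a hypothetical host and the index_synthetic_data configuration from the example response above:

```python
# Hypothetical host; replace with your actual Fusion host.
fusion_host = "fusion.example.com"
job_config_id = "index_synthetic_data"

url = f"https://{fusion_host}/api/spark/jobs/{job_config_id}"
print(url)  # https://fusion.example.com/api/spark/jobs/index_synthetic_data
```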

Response

200 - */*

OK

state
enum<string>

The job run's current status.

Available options: unknown, idle, starting, running, finishing, cancelling, finished, cancelled, error, skipped

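A client typically polls this endpoint until the run reaches one of the terminal states (finished, cancelled, error, skipped). A minimal sketch, with the HTTP call abstracted behind a callable so the loop itself can be demonstrated without a live server:

```python
import time

# Terminal values of the "state" enum above; a run in one of these states
# will not change again, so polling can stop.
TERMINAL_STATES = {"finished", "cancelled", "error", "skipped"}

def wait_for_job(fetch_state, interval=1.0, timeout=60.0):
    """Poll fetch_state() until it returns a terminal state or the timeout expires.

    fetch_state is any callable returning the current "state" string, e.g. a
    wrapper around GET /api/spark/jobs/{id}.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still in state {state!r} after {timeout}s")
        time.sleep(interval)

# Demonstration with a canned state sequence instead of live HTTP calls.
states = iter(["starting", "running", "finishing", "finished"])
final = wait_for_job(lambda: next(states), interval=0)
print(final)  # finished
```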
jobId
string

The unique job run ID.

jobConfig
object

The job's configuration details.

hostname
string

The host that ran the job.

result
object

startTime
string<date-time>

The job's start time, shown if the job has finished.

endTime
string<date-time>

The job's end time, shown if the job has finished.

duration
integer<int64>

The job's total run time, shown if the job has finished.
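For a finished run, the elapsed time can also be derived from startTime and endTime directly. A sketch with hypothetical timestamps in the RFC 3339 form the API uses (the duration field's unit is not stated above; this computes milliseconds):

```python
from datetime import datetime

# Hypothetical timestamps for illustration; substitute the startTime and
# endTime values returned for a finished run.
start = "2025-08-14T20:50:09.847Z"
end = "2025-08-14T20:53:12.102Z"

def parse_rfc3339(ts):
    # datetime.fromisoformat rejects a trailing "Z" before Python 3.11,
    # so normalise it to an explicit UTC offset first.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

elapsed_ms = int((parse_rfc3339(end) - parse_rfc3339(start)).total_seconds() * 1000)
print(elapsed_ms)  # 182255
```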