GET /spark/configurations/{id}
Get a job configuration
import requests

# Replace {FUSION HOST} with your Fusion host and {id} with the job configuration ID.
url = "https://{FUSION HOST}/api/spark/configurations/{id}"

# Basic authentication header; see the Authorizations section below.
headers = {"Authorization": "Basic <encoded-value>"}

response = requests.get(url, headers=headers)

print(response.text)
{
  "type": "parallel-bulk-loader",
  "id": "index_synthetic_data",
  "format": "json",
  "path": "gs://lucidworks-example-data/hardware/1000/1000_synthetic.json",
  "outputCollection": "Synthetic_dataset_test",
  "outputIndexPipeline": "Synthetic_dataset_test",
  "clearDatasource": false,
  "defineFieldsUsingInputSchema": true,
  "atomicUpdates": false,
  "cacheAfterRead": false,
  "continueAfterFailure": false,
  "updates": [
    {
      "userId": "docs",
      "timestamp": "2025-08-14T20:50:09.847634657Z"
    }
  ]
}

Authorizations

Authorization
string
header
required

Basic authentication header of the form Basic <encoded-value>, where <encoded-value> is the base64-encoded string username:password.
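For illustration, the `<encoded-value>` can be produced with Python's standard library; the username and password below are placeholders:

```python
import base64

# Placeholder credentials -- substitute your Fusion username and password.
username = "username"
password = "password"

# Base64-encode the "username:password" string to build the Basic auth header value.
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}

print(headers["Authorization"])  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```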

Path Parameters

id
string
required

Response

200 - */*

OK

The job's configuration details.

id
string

The name of the job configuration.

Example:

"scripted_job_example"

sparkConfig
object

The job's configuration details. The configuration keys depend on the type of job. Use /spark/schema to see the configuration schemas for all job types.

Example:

{
  "spark.cores.max": 2,
  "spark.executor.memory": "1g"
}

type
string

The job type.
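
As noted in the sparkConfig description, the configuration schemas for all job types are available from /spark/schema. A minimal sketch of that request, assuming the same host placeholder and Basic auth header as the example above (schema_url is a hypothetical helper, not part of the API):

```python
# Hypothetical helper that builds the /spark/schema endpoint URL for a Fusion host.
def schema_url(host: str) -> str:
    return f"https://{host}/api/spark/schema"

url = schema_url("{FUSION HOST}")  # substitute your Fusion host
headers = {"Authorization": "Basic <encoded-value>"}

# The request itself mirrors the configurations example above:
# import requests
# response = requests.get(url, headers=headers)
# print(response.json())
```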