Parallel Bulk Loader job configuration specifications
The Parallel Bulk Loader (PBL) job enables bulk ingestion of structured and semi-structured data from big data systems, NoSQL databases, and common file formats like Parquet and Avro.
Use this job to load data into Fusion from any Spark SQL-compliant data source and send it to any Spark-supported destination, such as Solr, an index pipeline, S3, or GCS.
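Conceptually, the job wraps a standard Spark read/transform/write cycle. The Scala sketch below is illustrative only, not the job's actual implementation; it assumes the spark-solr connector (the "solr" format) and uses placeholder paths, hosts, and collection names.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pbl-sketch").getOrCreate()

// Read from a Spark SQL-compliant data source (format and path are placeholders).
val inputDF = spark.read
  .format("parquet")
  .load("s3a://example-bucket/data/")

// An optional transform step (Scala and/or SQL) would run here.

// Write to Solr via the spark-solr connector; "zkhost" and "collection" are
// spark-solr options, shown here with placeholder values.
inputDF.write
  .format("solr")
  .option("zkhost", "localhost:9983")
  .option("collection", "example_collection")
  .save()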
The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.
<= 63 characters
Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
Spark configuration settings.
object attributes:
  key (required): display name "Parameter Name", type string
  value: display name "Parameter Value", type string
Specifies the input data source format; common examples include: parquet, json, textinputformat
Path to load; for data sources that support multiple paths, separate paths with commas.
Stream data from input source to output Solr collection
Specifies the output mode for streaming. E.g., append (default), complete, update
Default: append
Allowed values: append, complete, update
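These values correspond to Spark Structured Streaming output modes: append emits only rows added since the last trigger, complete rewrites the full result each trigger, and update emits only rows that changed. A minimal Scala sketch, assuming a placeholder input path and using the console sink purely for illustration:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("streaming-sketch").getOrCreate()

// File sources require an explicit schema; here one is inferred from a sample file.
val schema = spark.read.json("/tmp/sample.json").schema

val streamDF = spark.readStream
  .format("json")
  .schema(schema)
  .load("/tmp/incoming/")

// outputMode accepts "append", "complete", or "update"; "complete" and
// "update" are only valid for certain queries (e.g., aggregations).
val query = streamDF.writeStream
  .outputMode("append")
  .format("console")
  .start()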
Options passed to the data source to configure the read operation; options differ for every data source so refer to the documentation for more information.
object attributes:
  key (required): display name "Parameter Name", type string
  value: display name "Parameter Value", type string
Solr collection that receives the documents loaded from the input data source.
Send the documents loaded from the input data source to an index pipeline instead of sending them directly to Solr.
Parser to send the documents through when using an index pipeline (defaults to the same name as the index pipeline).
If true, define fields in Solr using the input schema; if a SQL transform is defined, the fields to define are based on the transformed DataFrame schema instead of the input.
Default: true
Send documents to Solr as atomic updates; only applies if sending directly to Solr and not an index pipeline.
Default: false
Name of the field that holds a timestamp for each document; only required if using timestamps to filter new rows from the input source.
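For example, an incremental run can filter the input down to rows newer than the previous run's cutoff. A minimal Scala sketch; the field name timestamp_tdt and the cutoff source are hypothetical:

import java.sql.Timestamp
import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.functions.col

// Keep only rows stamped after the previous run; "timestamp_tdt" is a
// placeholder for whatever field the job is configured to use.
def newRowsSince(inputDF: Dataset[Row], lastRun: Timestamp): Dataset[Row] =
  inputDF.filter(col("timestamp_tdt") > lastRun)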
If true, delete any documents indexed in Solr by previous runs of this job.
Default: false
Number of partitions to split the input DataFrame into before writing to Solr or Fusion.
Optimize the Solr collection down to the specified number of segments after writing to Solr.
Options used when writing output. For output formats other than solr or index-pipeline, the format and path options can be specified here.
object attributes:
  key (required): display name "Parameter Name", type string
  value: display name "Parameter Value", type string
Optional Scala script used to transform the results returned by the data source before indexing. You must define your transform script in a method with signature: def transform(inputDF: Dataset[Row]) : Dataset[Row]
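For example, a transform that drops rows missing a title and adds a lowercased copy of it might look like this (the column names are illustrative, not part of the job spec):

import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.functions.{col, lower}

// The job invokes transform() on the DataFrame read from the input data
// source and indexes whatever it returns.
def transform(inputDF: Dataset[Row]): Dataset[Row] = {
  inputDF
    .filter(col("title").isNotNull)              // drop rows without a title
    .withColumn("title_lc", lower(col("title"))) // add a lowercased copy
}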
The ID of the Spark ML PipelineModel stored in the Fusion blob store.
Optional SQL used to transform the results returned by the data source before indexing. The input DataFrame returned from the data source will be registered as a temp table named '_input'. The Scala transform is applied before the SQL transform if both are provided, which allows you to define custom UDFs in the Scala script for use in your transformation SQL.
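For example, the Scala transform can register a UDF that the SQL transform then applies to the _input table; the UDF and column names here are illustrative:

import org.apache.spark.sql.{Dataset, Row}

// Register a UDF for the SQL transform to use, then pass the DataFrame
// through unchanged.
def transform(inputDF: Dataset[Row]): Dataset[Row] = {
  inputDF.sparkSession.udf.register("trimText",
    (s: String) => if (s == null) null else s.trim)
  inputDF
}

// The SQL transform could then reference it against the _input temp table:
// SELECT trimText(title) AS title_clean, * FROM _input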
Additional options to pass to the Spark shell when running this job.
object attributes:
  key (required): display name "Parameter Name", type string
  value: display name "Parameter Value", type string
Key/value pairs to bind to the script interpreter.
object attributes:
  key (required): display name "Parameter Name", type string
  value: display name "Parameter Value", type string
If set to true and sending a document through an index pipeline fails, the job continues with the next document instead of failing.
Default: false
Spark job type.
Default: parallel-bulk-loader
Allowed values: parallel-bulk-loader