Jobs
Fusion provides the ability to run jobs against your data collections.
To create or configure the jobs detailed in reference topics in this section, sign in to Fusion and click Collections > Jobs. Then click Add+ to create a new job or select an existing job you want to configure.
The jobs described in this section have a subtype property with one of these values:

- "task". Jobs of this type include the following:
  - REST HTTP calls job, which runs REST/HTTP commands
  - Log cleanup job, which deletes log messages from the system logs collection
- "spark". Jobs of this type process data and include the following:
  - SQL Aggregation job, which injects user-defined parameters into a SQL template
  - Custom Python job, which runs Python code using Fusion (see the sketch after this list)
  - Custom Spark job, which runs a custom JAR file
  - Script job, which runs a custom Scala script using Fusion
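As a rough illustration of a Custom Python job, the sketch below reads a collection into a DataFrame with the spark-solr data source and aggregates on a field. The collection name, ZooKeeper host, and field name are placeholders, and the exact options available depend on your Fusion version, so treat this as a starting point rather than a reference implementation.

```python
# Hypothetical Custom Python job script. Fusion runs the script with PySpark
# available; getOrCreate() reuses the session if one is already provided.
# The collection, zkhost, and field values below are placeholders, and the
# "solr" format assumes the spark-solr library bundled with Fusion.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-python-job").getOrCreate()

# Read documents from a Fusion collection via spark-solr (assumed available).
df = (spark.read.format("solr")
      .option("collection", "my_collection")   # placeholder collection name
      .option("zkhost", "localhost:9983")      # placeholder ZooKeeper host
      .load())

# Simple transformation: count documents per value of a hypothetical field.
counts = df.groupBy("category_s").count()
counts.show()
```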
These additional jobs also have reference topics in this section:

- Supervised classification jobs, such as Build Training Data and Classification
- Recommendation jobs, such as BPR Recommender and Content-Based Recommender
- Cluster Labeling and Document Clustering jobs
Jobs with a subtype of "datasource" have configuration schemas that depend on the connector type. For more information, see Connectors Configuration Reference. You cannot create, run, or schedule "datasource" subtype jobs in the Collections > Jobs screen.
For conceptual information and instructions about configuring and scheduling jobs, see Jobs and Schedules.
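Jobs can also be started programmatically. The sketch below is a minimal example, assuming the Jobs API layout used by recent Fusion releases (a POST to /api/jobs/<resource>/actions with an {"action": "start"} payload); the host, credentials, and job resource ID are placeholders, so verify the endpoint against the Jobs API reference for your Fusion version.

```python
# Hypothetical example of starting a Fusion job over REST. The endpoint shape
# (POST /api/jobs/<resource>/actions with {"action": "start"}) is assumed from
# recent Fusion releases; host, credentials, and the job resource ID are
# placeholders -- check the Jobs API reference for your version.
import requests

FUSION_HOST = "https://localhost:6764"   # placeholder Fusion API host
JOB_ID = "spark:my-aggregation-job"      # placeholder job resource ID

resp = requests.post(
    f"{FUSION_HOST}/api/jobs/{JOB_ID}/actions",
    json={"action": "start"},
    auth=("admin", "password123"),       # placeholder credentials
    verify=False,                        # for self-signed dev certs only
)
resp.raise_for_status()
print(resp.json())
```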
Lucidworks offers free training to help you get started with Fusion. Check out the Managing and Scheduling Jobs quick learning, which focuses on how to create, configure, and schedule jobs using the Fusion UI. Visit the LucidAcademy to see the full training catalog.