Spark job subtypes
For the Spark job type, the available subtypes are listed below.
- SQL Aggregation
 A Spark SQL aggregation job where user-defined parameters are injected into a built-in SQL template at runtime.
- Custom Python
 The Custom Python job gives users the ability to run their own Python code via Managed Fusion. This job supports Python 3.6+ code.
- Script
 This job lets you run a custom Scala script in Managed Fusion.
See Additional Spark jobs for more information.
Spark job configuration
Spark jobs can be created, modified, and scheduled using the Managed Fusion UI or the Spark Jobs API. For the complete list of configuration parameters for all Spark job subtypes, see the Jobs Configuration Reference.
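For example, a configured job can be started and checked with calls to the Spark Jobs API. The commands below are a hedged sketch only: USERNAME, PASSWORD, FUSION_HOST, FUSION_PORT, and JOB_ID are placeholders, and the exact endpoint paths may vary between Managed Fusion versions.

```
# Start the Spark job whose configuration ID is JOB_ID (illustrative endpoint).
curl -u USERNAME:PASSWORD -X POST "http://FUSION_HOST:FUSION_PORT/api/spark/jobs/JOB_ID"

# Check the status of the most recent run of the same job.
curl -u USERNAME:PASSWORD "http://FUSION_HOST:FUSION_PORT/api/spark/jobs/JOB_ID"
```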
Machine learning jobs
Managed Fusion provides these job types to perform machine learning tasks.
Signals analysis
These jobs analyze a collection of signals in order to perform query rewriting, signals aggregation, or experiment analysis.
- Ground Truth
 Ground truth or gold standard datasets are used in the ground truth jobs and query relevance metrics to define a specific set of documents.
Query rewriting
These jobs produce data that can be used for query rewriting or to inform updates to the synonyms.txt file.
- Head/Tail Analysis
 Perform head/tail analysis of queries from collections of raw or aggregated signals to identify underperforming queries and the reasons behind them. This information is valuable for improving Solr configurations, auto-suggest, product catalogs, and SEO/SEM strategies, with the goal of increasing overall conversion rates.
- Phrase Extraction
 Identify multi-word phrases in signals.
- Synonym Detection
 Use this job to generate pairs of synonyms and pairs of similar queries. Two words are considered potential synonyms when they are used in a similar context in similar queries.
- Token and Phrase Spell Correction
 Detect misspellings in queries or documents based on the number of occurrences of words and phrases.
Signals aggregation
- SQL Aggregation
 A Spark SQL aggregation job where user-defined parameters are injected into a built-in SQL template at runtime.
Experiment analysis
- Ranking Metrics
 Use this job to calculate relevance metrics by replaying ground truth queries against catalog data using variants from an experiment. Metrics include Normalized Discounted Cumulative Gain (nDCG) and others.
Collaborative recommenders
These jobs analyze signals and generate matrices used to provide collaborative recommendations.
- BPR Recommender
 Use this job when you want to compute user recommendations or item similarities using a Bayesian Personalized Ranking (BPR) recommender algorithm.
- Query-to-Query Session-Based Similarity
 This recommender is based on co-occurrence of queries in the context of clicked documents and sessions. It is useful when your data shows that users tend to search for similar items in a single search session. This method of generating query-to-query recommendations is faster and more reliable than the Query-to-Query Similarity recommender job, and, unlike the similar queries previously generated as part of the Synonym Detection job, it is session-based.
Content-based recommenders
Content-based recommenders create matrices of similar items based on their content.
- Content-Based Recommender
 Use this job when you want to compute item similarities based on their content, such as product descriptions.
Content analysis
- Cluster Labeling
 Cluster labeling jobs are run against your data collections, and are used:
 - When clusters or well-defined document categories already exist
 - When you want to discover and attach keywords that reveal the representative words within existing clusters
- Document Clustering
 The Document Clustering job uses an unsupervised machine learning algorithm to group documents into clusters based on similarities in their content. You can enable more efficient document exploration by using these clusters as facets, high-level summaries, or themes, or by recommending other documents from the same cluster. The job can automatically group similar documents in all kinds of content, such as clinical trials, legal documents, book reviews, blogs, scientific papers, and products.
- Classification
 This job analyzes how your existing documents are categorized and produces a classification model that can be used to predict the categories of new documents at index time.
- Outlier Detection
 Outlier detection jobs are run against your data collections and perform the following actions:
 - Identify information that significantly differs from other data in the collection
 - Attach labels to designate each outlier group
Data ingest
The Parallel Bulk Loader (PBL) job enables bulk ingestion of structured and semi-structured data from big data systems, NoSQL databases, and common file formats like Parquet and Avro. Supported datasources include not only these file formats but also Solr, JDBC-compliant databases, MongoDB, and more. The PBL distributes the load across the Managed Fusion Spark cluster to optimize performance, and because no parsing is needed, indexing performance is maximized by writing directly to Solr. For more information about usage and detailed configuration, see the Parallel Bulk Loader configuration reference.
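As an illustration, the sketch below creates a PBL job configuration that reads Parquet files and writes them directly to a Solr collection. It is a hedged sketch only: the endpoint path, the job type name, and the field names (format, path, outputCollection) are assumptions drawn from a typical Parquet ingest scenario and should be verified against the Parallel Bulk Loader configuration reference for your Managed Fusion version.

```
# Illustrative sketch: create a Parallel Bulk Loader job configuration.
# Endpoint path and field names are assumptions; verify them against the
# Parallel Bulk Loader configuration reference.
curl -u USERNAME:PASSWORD -X POST -H "Content-Type: application/json" \
  "http://FUSION_HOST:FUSION_PORT/api/spark/configurations" -d '{
    "type": "parallel-bulk-loader",
    "id": "ingest_products_parquet",
    "format": "parquet",
    "path": "s3a://example-bucket/products/",
    "outputCollection": "products"
  }'
```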
Learn more
Get Logs for a Spark Job
See the table below for useful commands related to Spark jobs:
| Description | Command | 
|---|---|
| Retrieve the initial logs that contain information about the pod spin up. | curl -X GET -u USERNAME:PASSWORD http://FUSION_HOST:FUSION_PORT/api/spark/driver/log/JOB_ID | 
| Retrieve the driver pod ID. | kubectl get pods -l spark-role=driver -l jobConfigId=JOB_ID | 
| Retrieve logs from failed jobs. | kubectl logs DRIVER_POD_NAME | 
| Tail logs from running containers by using the -f parameter. | kubectl logs -f POD_NAME | 
Spark deletes failed and successful executor pods. Fusion provides a cleanup Kubernetes cron job that removes successfully completed driver pods every 15 minutes.
Viewing the Spark UI
If you need to monitor or inspect your Spark job executions, you can use port forwarding to access the Spark UI in your web browser. Port forwarding forwards your local port connection to the port of the pod that is running the Spark driver. To view the Spark UI, find the pod that is running the Spark driver, forward a local port to that pod, and then access the Spark UI at localhost:4040. For related topics, see Spark Operations.
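A minimal sketch of the port-forwarding step, assuming the driver pod name is found with the kubectl command from the table above and that the Spark UI is listening on its default port, 4040:

```
# Find the driver pod for your job (JOB_ID is the job configuration ID).
kubectl get pods -l spark-role=driver -l jobConfigId=JOB_ID

# Forward local port 4040 to the Spark UI port on the driver pod.
kubectl port-forward DRIVER_POD_NAME 4040:4040
```

While the forward is active and the job is running, open localhost:4040 in your browser.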