Search mode
When creating a query pipeline, you can select a search mode for the pipeline to use.

- `dsl` (Domain Specific Language) uses expressive search queries and responses via a structured, modern JSON format.
- `legacy` primarily uses Solr parameters. See the Solr Query Language cheat sheet.
- `all` uses both DSL and legacy search modes. This is the default value.
The default value of `all` works well for most use cases.

Default query pipelines
Fusion creates a default query pipeline when you create an app. The query pipeline has the same name as the app. The default query pipeline has the following pre-configured stages:

- Text Tagger
This stage uses the SolrTextTagger handler to identify known entities in the query by searching the `COLLECTION_NAME_query_rewrite` collection. See Manage Collections in the Fusion UI for more information.
Manage Collections in the Fusion UI
Collections can be created or removed using the Fusion UI or the REST API. For information about using the REST API to manage collections, see Collections API in the REST API Reference.

You can map a Fusion collection to multiple Solr collections, known here as partitions, where each partition contains data from a specific time range. To configure time-based partitioning, under Time Series Partitioning click Enable. See Time-Based Partitioning for more information.

To stop a datasource immediately, choose Abort instead of Stop. There is also a REST API for datasources.
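As a sketch only: a collection-creation call can be assembled as shown below. The `/api/collections/{name}` path, the PUT method, and the `solrParams` body field are assumptions based on typical Fusion API layouts; verify them against the Collections API in the REST API Reference before use.

```javascript
// Hypothetical helper that assembles a Fusion Collections API request.
// The endpoint path and body fields are illustrative, not authoritative.
function buildCreateCollectionRequest(fusionHost, collectionName) {
  return {
    method: "PUT",
    url: "https://" + fusionHost + "/api/collections/" + collectionName,
    headers: { "Content-Type": "application/json" },
    // solrParams mirrors the Advanced options in the UI (shards, replicas).
    body: JSON.stringify({
      solrParams: { numShards: 1, replicationFactor: 2 }
    })
  };
}

var req = buildCreateCollectionRequest("FUSION_HOST", "my_collection");
console.log(req.method + " " + req.url);
```

Sending the request (for example, with fetch or curl) also requires Fusion authentication, which is omitted here.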
Creating a Collection
When you create an app, by default Fusion Server creates a collection and associated objects.

To create a new collection in the Fusion UI:

- From within an app, click Collections > Collections Manager.
- At the upper right of the panel, click New.
- Enter a Collection name. This name cannot be changed later.
- To create the collection in the default Solr cluster and with other default settings, click Save Collection.
Creating a Collection with Advanced Options
To access advanced options for creating a collection in the Fusion UI:

- From within an app, click Collections > Collections Manager.
- At the upper right of the panel, click New.
- Enter a Collection name. This name cannot be changed later.
- Click Advanced.
- Configure advanced options. The options are described below.
- Click Save Collection.
Solr Cluster
By default, a new collection is associated with the Solr instance that is associated with the default Solr cluster. If Fusion has multiple Solr clusters, choose from the list which cluster you want to associate your collection with. The cluster must exist first.

Solr Cluster Layout
The next section lets you define a Replication Factor and Number of Shards. Define these options only if you are creating a new collection in the Solr cluster. If you are linking Fusion to an existing Solr collection, you can skip these settings.

Solr Collection Import
Import a Solr collection to associate the new Fusion collection with an existing Solr collection. Enter a Solr Collection Name to identify the existing collection. Then, enter a Solr Config Set to tell ZooKeeper to use the configurations from an existing collection in Solr when creating this collection.

Time Series Partitioning
Available in 4.x only.
Configuring Collections
The Collections menu lets you configure your existing collection, including datasources, fields, jobs, stopwords, and synonyms.

In the Fusion UI, from any app, the Collections icon displays on the left side of the screen.

Some tasks related to managing a collection are available in other menus:

- Configure a profile in Indexing > Indexing Profiles or Querying > Query Profiles.
- View reports about your collection’s activity in Analytics > Dashboards.
Collections Manager
The Collections Manager page displays details about the collection, such as how many datasources are configured, how many documents are in the index, and how much disk space the index consumes. This page also lets you create a new collection, disable search logs or signals, enable recommendations, issue a commit command to Solr, or clear a collection.

Disable search logs
When you first create a collection, the search logs are created by default. The search logs populate the panels in Analytics > Dashboards.

- Hover over your collection name until the gear icon appears at the end of the line.
- Click the gear icon.
- Click Disable Search Logs.
- On the confirmation screen, click Disable Search Logs.
See Dashboards for more information:

- Fusion 5.x
- Fusion 4.x
Disable signals
When you first create a collection, the signals and aggregated signals collections are created by default.

- Hover over your collection name until the gear icon appears at the end of the line.
- Click the gear icon.
- Click Disable Signals.
- On the confirmation screen, click Disable Signals.
Hard commit a collection
- Hover over your collection name until the gear icon appears at the end of the line.
- Click the gear icon.
- Click Hard Commit Collection.
- On the confirmation screen, click Hard Commit Collection.
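Hard Commit Collection issues a Solr hard commit behind the scenes. If you ever need the equivalent direct Solr call, it is an update request with `commit=true`. The helper below only builds that URL; the host, default port 8983, and collection name are placeholders, and in Fusion the backing Solr collection typically shares the Fusion collection's name.

```javascript
// Build the URL for a direct Solr hard commit on one collection.
// Solr's default port 8983 is assumed; adjust for your deployment.
function buildCommitUrl(solrHost, collection) {
  return "http://" + solrHost + ":8983/solr/" + collection + "/update?commit=true";
}

console.log(buildCommitUrl("SOLR_HOST", "my_collection"));
```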
Datasources
To access the Datasources page, click Indexing > Datasources. By default, there are no datasources configured right after installation.

To add a new datasource, click New at the upper right of the panel. See the Connectors and Datasources Reference for details on how to configure a datasource. Options vary depending on the repository you would like to index.

After you configure a datasource, it appears in a list on this screen. Click the name of a datasource to edit its properties. Click Start to start the datasource. Click Stop to stop the datasource before it completes. To the right, view information on the last completed job, including the date and time started and stopped, and the number of documents found as new, skipped, or failed.

When you stop a datasource, Fusion attempts to safely close connector threads, finishing processing documents through the pipeline and indexing documents to Solr. Some connectors take longer to complete these processes than others, so they might stay in a “stopping” state for several minutes.
Stopwords
The Stopwords page lets you edit a stopwords list for your collection.

To add or delete stop words:

- Click the name of the text file you wish to edit.
- Add a new word on a new line.
- When you are done with your changes, click Save.

To import a stopwords file:

- Click System > Import Fusion Objects.
- Choose the file to upload.
- Click Import >>.
Synonyms
Fusion has the same synonym functionality that Solr supports. This includes a list of words that are synonyms (where the synonym list expands on the terms entered by the user), as well as a full mapping of words, where a word is substituted for what the user has entered (that is, the term the user has entered is replaced by a term in the synonym list).

You can edit the synonyms list for your collection. To access the Synonyms page in the Fusion UI, in any app, click Collections > Synonyms. Filter the list of synonym definitions by typing in the Filter… box.

To import a synonyms list:

- From the Synonyms page, click Import and Save. A dialog box opens.
- Choose the file to import.

To edit synonym definitions:

- Enter new synonym definitions one per line.
  - To enter a string of terms that expand on the terms the user entered, enter the terms separated by commas, like Television, TV.
  - To enter a term that should be mapped to another term, enter the terms separated by an equal sign then a right angle bracket (=>), like i-pod=>ipod.
- Remove a line by clicking the x at the end of the line.
- Once you are finished with edits, click Save.
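Putting the two forms together, an imported synonyms file might look like the following (the third line is an illustrative addition):

```
Television, TV
i-pod=>ipod
sofa, couch, divan
```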
Profiles
Profiles allow you to create an alias for an index or query pipeline. This allows you to send documents or queries to a consistent endpoint and change the underlying pipeline or collection as needed.

Read about profiles in Index Profiles and Query Profiles:

- Fusion 5.x
- Fusion 4.x
Learn more
Collections Menu Tour
This quick learning focuses on the Collections Menu features and functionality, along with a brief description of each screen available in the menu.
For Fusion 5.x.x organizations that do not have a Predictive Merchandiser license, the Solr Text Tagger handler also searches the `COLLECTION_NAME_query_rewrite_staging` collection in the case of the Fusion query rewriting Simulator.

- Boost with Signals
The Boost with Signals query pipeline stage uses aggregated signals to selectively boost items in the set of search results.
- Query Fields
The Query Fields query pipeline stage defines common Solr query parameters for the edismax query parser. If using a less-than sign (<) with DisMax, it must be escaped using a backslash.
An alternative to this stage is the Additional Query Parameters stage.
- Field Facet
The Field Facet query pipeline stage is used to add a Solr Field Facet query to the search query pipeline.
- Apply Rules
This stage looks up rules that have been deployed to the `COLLECTION_NAME_query_rewrite` collection and matches them against the query. Matching rules that perform query rewriting are applied at this stage, while matching rules that perform response rewriting are applied by the Modify Response with Rules stage later in the pipeline.
To trigger a rule that contains a tag, specify the tag name in the request URL of the user search app. See Easily define triggers in tags for more information.
- Solr Query
The Solr Query stage transforms the Fusion query pipeline Request object into a Solr query and sends it to Solr.
- Modify Response with Rules
Most rules operate on the request, but some rule types, such as banner rules or redirect rules, do their work when the response comes back. The Modify Response with Rules stage applies those rules to the response. For example, a banner rule can add a banner URL to the response before returning it to the client.
Custom query pipelines
Using the Query Workbench or the REST API, you can develop custom pipelines to suit any search application. Start with any of Fusion’s built-in query pipelines, then add, remove, and re-order the pipeline stages as needed to produce the appropriate query results.

Asynchronous query pipeline processing
Query pipeline processing performance can be improved by enabling asynchronous processing for certain stages that make requests to secondary collections, external databases, and so on. The following stages support asynchronous processing:

- Active Directory Security Trimming
- Apply Rules
- Boost with Signals
- JDBC Lookup
- LWAI Prediction
- LWAI Vectorize Query
- Security Trimming
- Solr Subquery
Monitoring for asynchronous query pipeline stages
The monitoring feature provides a framework that integrates with tools such as Grafana for seamless observability, and supports complex AI-driven search and generative AI (Gen-AI) workloads. Monitoring provides critical, real-time visibility into the performance and reliability of asynchronous query pipeline stages. The information includes tracking execution times, failures, and performance bottlenecks.

In addition, monitoring information enables system optimization and the ability to more quickly and easily troubleshoot issues. Use these execution metrics and failure analysis to reduce downtime and accelerate issue resolution, as well as optimize search and AI-driven applications for efficiency and scalability. This ensures better search relevance, improved operational stability, and readiness for next-generation AI search experiences.

Debugging
If your pipeline is not producing the desired result, these debugging tips can help you identify the source of the issue and resolve it.

View parameters
When debugging a pipeline, it helps to see the parameters that are being passed to or from each stage. There are several ways to expose those parameters.

Add a Logging stage to view parameters
Follow these steps to add a Logging stage to your pipeline:
- In Fusion UI, navigate to Indexing > Index Pipelines (for index pipelines) or Querying > Query Pipelines (for query pipelines).
- Click Add a new pipeline stage.
- Select Logging from the Troubleshooting section.
- In the Label field, enter a descriptive name (for example, “Debug After Vectorize”).
- Set the detailed property to `true` to print the full Request or PipelineDocument object.
- Place the Logging stage after the stage you want to debug.
- Click Save.
- Run your pipeline and check the appropriate log file for your pipeline type:
  - Query pipelines: https://FUSION_HOST/var/log/api/api.log
  - Index pipelines: https://FUSION_HOST/var/log/api/fusion-indexing.log
Use JavaScript to inspect context variables
Follow these steps to inspect context variables:
- Add a JavaScript stage to your pipeline.
- Use the `ctx` variable to inspect context data.
- Check logs at https://FUSION_HOST/var/log/api/api.log.
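For example, a JavaScript stage body along these lines logs the context contents. The `(request, response, ctx)` parameter list and the global `logger` follow common Fusion JavaScript stage conventions, and the `collection` key is illustrative; confirm the exact signature for your Fusion version.

```javascript
// Sketch of a Fusion JavaScript stage body that inspects the pipeline
// context. ctx behaves like a map (get/keySet); logger is provided by
// the pipeline at runtime.
function inspectContext(request, response, ctx) {
  // List every key currently stored in the pipeline context.
  logger.info("Context keys: " + ctx.keySet());

  // Log one specific entry; "collection" is an illustrative key name.
  logger.info("collection = " + ctx.get("collection"));

  // Returning the request passes it unchanged to the next stage.
  return request;
}
```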
Use debug info in Query Workbench
Follow these steps to debug using Query Workbench:
- Navigate to Querying > Query Workbench.
- Enter a test query and click Search.
- Click the Debug tab. The debug view displays the following information:
- Request parameters
- Pipeline stage execution details
- Response data, including `responseHeader` and `debug.explain`
- Switch to View As: JSON to see the full response structure.
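The Debug tab surfaces standard Solr debug output. A heavily trimmed JSON response might look like this (field values are illustrative):

```json
{
  "responseHeader": { "status": 0, "QTime": 12 },
  "response": { "numFound": 2, "start": 0, "docs": [] },
  "debug": {
    "parsedquery": "+DisjunctionMaxQuery((name_t:tv))",
    "explain": {
      "doc1": "0.53 = weight(name_t:tv in 0) ..."
    }
  }
}
```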
Enable Fail on Error
The Fail on Error setting determines whether silent failures can occur in your pipeline. By enabling Fail on Error during development, testing, and troubleshooting, you ensure that configuration issues, authentication problems, or model errors are immediately visible rather than producing incomplete or incorrect data that can be difficult to troubleshoot later.

Configure the Fail on Error setting in LWAI stages
The Fail on Error setting controls whether pipeline execution stops when an LWAI stage encounters an error. Follow these steps to configure this setting:
- In Fusion UI, navigate to your pipeline (Index or Query).
- Click the LWAI stage you want to configure (for example, LWAI Vectorize Field or LWAI Vectorize Query).
- Locate the Fail on Error checkbox at the bottom of the stage configuration.
- Select the checkbox to enable the following behaviors:
- Stop pipeline processing and throw an exception on errors
- Get immediate feedback when LWAI models fail or are misconfigured
- Guarantee data quality by preventing indexing of documents without vectors
In production environments, keep this feature disabled to ensure that service is not interrupted when errors occur.
- Click Save.
Test the Fail on Error configuration
Follow these steps to verify your Fail on Error configuration:
- Trigger an intentional error (for example, use an invalid model name or account).
- Verify that the pipeline fails (when Fail on Error is enabled) or continues (when Fail on Error is disabled).
- Review logs at https://FUSION_HOST/var/log/api/api.log for error messages.