The LWAI Prediction index stage is an integration between Fusion and Lucidworks AI that enriches your index with generative AI predictions. These predictions can enrich product descriptions and other metadata to improve search relevance and recommendations, and the stage can also summarize long documents or clarify descriptions and other data. For example, a B2B organization that provides industrial technology components and information can use this stage to enhance product part descriptions and to add category classifications and tags that improve search relevance. For technical specifications and detailed instructional guides, the stage can summarize that information and enhance discoverability. In a similar fashion, the stage can enrich catalog items and information to improve relevance and customer search experiences for B2C organizations.

If you enable the Call Asynchronously? field, the stage processes predictions without blocking the indexing pipeline. Asynchronous predictions are particularly useful for large documents or generative AI models where response times may be longer. Fusion submits the selected document content to Lucidworks AI and continues processing the pipeline while the Lucidworks AI model generates the response.
This behavior follows a processing pattern similar to asynchronous Tika parsing, where documents are indexed first and enrichment results are applied at a different time when system performance allows. Running predictions asynchronously helps maintain indexing throughput when generating AI responses that may take longer to complete.
An example of a simplified asynchronous prediction flow includes the following processes:
  • A document enters the Fusion index pipeline.
  • Fusion processes the document and prepares it for indexing or update.
  • The LWAI Prediction index stage sends the configured input field to Lucidworks AI.
  • Lucidworks AI processes the request and generates the prediction asynchronously.
  • Fusion receives the prediction and writes the result to the configured destination field in the document.
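The flow above can be sketched as a simplified in-memory simulation. Everything here (function names, the `PENDING` and `INDEX` dictionaries, the sample document) is an illustrative assumption for the sketch, not a real Fusion or Lucidworks API:

```python
import uuid

# Illustrative stand-ins for Fusion internals and Lucidworks AI;
# these are assumptions for the sketch, not real Fusion APIs.
PENDING = {}   # predictionId -> (doc, destination_field)
INDEX = {}     # doc id -> indexed document

def submit_async_prediction(doc, input_field, destination_field):
    """Send the configured input field's content to the model and return immediately."""
    payload = doc[input_field]   # content that would be sent to Lucidworks AI
    prediction_id = str(uuid.uuid4())
    PENDING[prediction_id] = (doc, destination_field)
    return prediction_id

def index_document(doc):
    """Fusion continues the pipeline and indexes the document right away."""
    INDEX[doc["id"]] = dict(doc)

def apply_prediction(prediction_id, prediction_text):
    """When the model responds, write the result to the destination field."""
    doc, destination_field = PENDING.pop(prediction_id)
    INDEX[doc["id"]][destination_field] = prediction_text

# A document enters the pipeline, is indexed first, and is enriched later.
doc = {"id": "part-42", "description_t": "M8 hex bolt, zinc plated"}
pid = submit_async_prediction(doc, "description_t", "summary_t")
index_document(doc)                                         # indexing is not blocked
apply_prediction(pid, "Corrosion-resistant M8 hex bolt.")   # enrichment applied later
```

The key point of the pattern is the ordering: `index_document` runs before `apply_prediction`, so indexing throughput is never held up by model latency.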

Prerequisites

Lucidworks AI Gateway

Make sure you have configured a Lucidworks AI Gateway integration before you begin. Lucidworks AI Gateway provides a secure, authenticated connection between Fusion and your hosted models.

Permissions

To use this stage, non-admin Fusion users must be granted the PUT,POST,GET:/LWAI-ACCOUNT-NAME/** permission in Fusion, where LWAI-ACCOUNT-NAME is the Lucidworks AI API Account Name defined in Lucidworks AI Gateway and selected when this stage is configured.

Configuration

  1. Sign in to Fusion and click Indexing > Index Pipelines.
  2. Click Add+ to add a new pipeline.
  3. Enter a name in the Pipeline ID field.
  4. Click Add a new pipeline stage.
  5. In the AI section, click LWAI Prediction.
  6. In the Label field, enter a unique identifier for this stage.
  7. In the Condition field, enter a script that results in true or false, which determines if the stage should process.
  8. In the Account Name field, select the Lucidworks AI API account name defined in Lucidworks AI Gateway.
  9. In the Use Case field, select the Lucidworks AI use case to associate with this stage.
    • To generate a list of the use cases for your organization, see Use Case API.
    • If the Call Asynchronously? check box is selected, the index pipeline continues to process documents while the prediction is handled separately. For more information, see available use cases described in Async Prediction API.
    • If the Call Asynchronously? check box is not selected, the index pipeline waits until the prediction response is returned before continuing. For more information, see available use cases described in Prediction API.
  10. In the Model field, select the Lucidworks AI model to associate with this stage.
If you do not see any model names and you are a non-admin Fusion user, verify with a Fusion administrator that your user account has the PUT,POST,GET:/LWAI-ACCOUNT-NAME/** permission. Your Fusion account name must match the name of the account that you selected in the Account Name dropdown. For more information about models, see Generative AI models.
  11. In the Input context variable field, enter the name of the variable in context to be used as input. Template expressions are supported.
  12. In the Destination field name and context output field, enter the name used both as the field in the document where the prediction is written and as the context variable that contains the prediction.
  • If the Call Asynchronously? check box is selected and a value is entered in this field:
    • {destination name}_t is the full response.
    • In the document:
      • _lw_ai_properties_ss contains the Lucidworks AI account name, the asynchronous setting, the use case, the input sent in the call, and the collection.
      • _lw_ai_request_count is the number of GET requests by predictionId and _lw_ai_success_count is the number of responses without errors.
        Because asynchronous predictions may be retried multiple times (based on the Maximum Asynchronous Call Tries setting), the request count may be greater than one.
        These fields are only used for debugging. When troubleshooting, the most useful measure is the ratio of _lw_ai_success_count divided by _lw_ai_request_count. A value close to 1.0 indicates that prediction requests are completing successfully. Lower ratios may indicate retries, API errors, or connectivity issues with the Lucidworks AI service.
        If _lw_ai_success_count remains 0 while _lw_ai_request_count increases, verify the following configuration is set correctly:
        • Lucidworks AI Gateway configuration
        • API authentication settings
        • Network connectivity between Fusion and the AI service
        • Selected use case and model are valid for asynchronous predictions
      • enriched_ss contains the use case. This can be checked as a boolean indicator that the use case enrichment was indexed successfully.
  • If the Call Asynchronously? check box is not selected and a value is entered in this field:
    • {destination name}_t is the full response.
  • If no value is entered in this field (regardless of the Call Asynchronously? check box setting):
    • _lw_ai_{use case}_t is the response.response object, which is the raw model output.
    • _lw_ai_{use case}_response_s is the full response.
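As a hedged illustration of how the debugging fields described above might be read back, the following sketch computes the success ratio used for troubleshooting. The field names come from this document; the document dict and its values are invented for the example:

```python
# Example document fields after asynchronous enrichment; the values are
# made up for illustration, but the field names are the stage's
# documented debugging output.
doc = {
    "summary_t": "Corrosion-resistant M8 hex bolt.",  # {destination name}_t
    "_lw_ai_request_count": 3,   # GET requests by predictionId (includes retries)
    "_lw_ai_success_count": 3,   # responses without errors
    "enriched_ss": ["summarization"],                 # use case marker
}

def success_ratio(doc):
    """A ratio close to 1.0 means prediction requests are completing
    successfully; lower values suggest retries or API errors."""
    requests = doc.get("_lw_ai_request_count", 0)
    if requests == 0:
        return None   # nothing has been requested yet
    return doc.get("_lw_ai_success_count", 0) / requests

ratio = success_ratio(doc)
```

A ratio of 1.0 here means every GET by predictionId returned without error; a document stuck at a ratio of 0 points at the configuration checks listed above.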
  13. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. The useCaseConfig parameter is only applicable to certain use cases.
  • If the Call Asynchronously? check box is selected, useCaseConfig information for each applicable use case is described in Async Prediction API.
  • If the Call Asynchronously? check box is not selected, useCaseConfig information for each applicable use case is described in Prediction API.
  14. In the Model Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases.
  • If the Call Asynchronously? check box is selected, modelConfig information is described in Async Prediction API.
  • If the Call Asynchronously? check box is not selected, modelConfig information is described in Prediction API.
  15. In the API Key field, enter the secret value specified in the external model. For:
  • OpenAI models, "apiKey" is the value in the model’s "[OPENAI_API_KEY]" field. For more information, see Authentication API keys.
  • Azure OpenAI models, "apiKey" is the value generated by Azure in either the model’s "[KEY1 or KEY2]" field. For requirements to use Azure models, see Generative AI models.
  • Google VertexAI models, "apiKey" is the value in the model’s "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]" field. For more information, see Create and delete Google service account keys.
  16. To run the API call asynchronously, select the Call Asynchronously? check box to specify that the stage uses the Lucidworks AI Async Prediction API endpoints. If this check box is selected, the API call does not block the pipeline while waiting for a response from Lucidworks AI.
If the check box is not selected, the API call uses the Prediction API, which blocks the pipeline until a response is received from Lucidworks AI. Performance of other API calls can be impacted.
  17. In the Maximum Asynchronous Call Tries field, enter the maximum number of times to send an asynchronous API call before the system generates a failure error.
  18. Select the Fail on Error check box to generate an exception if an error occurs while generating a prediction for a document.
  19. Click Save.
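The retry behavior governed by the Maximum Asynchronous Call Tries setting can be sketched as a simple polling loop. The polling callable and the simulated responses below are stand-ins for illustration, not Fusion or Lucidworks APIs:

```python
def poll_with_retries(get_prediction, prediction_id, max_tries):
    """Poll for an async prediction result, failing after max_tries attempts.

    get_prediction is a stand-in callable that returns the prediction
    text once ready, or None while the model is still working.
    """
    for attempt in range(1, max_tries + 1):
        result = get_prediction(prediction_id)
        if result is not None:
            return result, attempt
    raise RuntimeError(
        f"Prediction {prediction_id} failed after {max_tries} tries"
    )

# Simulate a model that only responds on the third poll.
responses = iter([None, None, "Corrosion-resistant M8 hex bolt."])
result, attempts = poll_with_retries(lambda _pid: next(responses), "abc123", 5)
```

Each poll in this sketch corresponds to one increment of `_lw_ai_request_count`, which is why that count can exceed one for a single document.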

Additional requirements

Asynchronous enrichment stages update documents after the initial indexing operation. For this reason, the indexing pipeline must support partial document updates. The following additional requirements apply:
  • Use a V2 connector. Only V2 connectors support this workflow.
  • Remove the Apache Tika parser stage if asynchronous parsing is enabled, as it may conflict with asynchronous processing. If the parser stage is not removed, the datasource fails with the following error: “The following components failed: [class com.lucidworks.connectors.service.components.job.processor.DefaultDataProcessor : Only Tika Container parser can support Async Parsing.]”
  • Replace the Solr Indexer stage with the Solr Partial Update Indexer stage with the following settings:
    • Enable Concurrency Control set to off
    • Reject Update if Solr Document is not Present set to off
    • Process All Pipeline Doc Fields set to on
    • Allow reserved fields set to on
    • Configure the required partial update parameters
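Because enrichment lands after the initial indexing operation, the result must be applied as a partial (atomic) update rather than a full re-index. A minimal sketch of the Solr-style atomic update body such a partial update would send, assuming standard Solr atomic-update syntax (the document ID and field values are invented):

```python
import json

def atomic_update(doc_id, field, value):
    """Build a Solr-style atomic update that sets one field on an
    existing document without replacing the whole document."""
    return [{"id": doc_id, field: {"set": value}}]

# Apply an enrichment result to an already-indexed document.
body = json.dumps(
    atomic_update("part-42", "summary_t", "Corrosion-resistant M8 hex bolt.")
)
```

The `set` modifier replaces only the named field, which is why the indexer stage must tolerate documents that already exist in the collection.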
By default, the Call Asynchronously? check box is selected, which specifies that this stage uses the Lucidworks AI Async Prediction API endpoints. If the check box is not selected, this stage uses the Prediction API endpoints.
When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
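The escaping rule can be checked with a quick sketch: a JSON API body requires the backslash itself to be escaped, so the same stored configuration value is typed differently in the UI and in the API:

```python
import json

# In the UI, type the characters as-is: \t (a backslash followed by t).
ui_value = "\\t"                    # two characters: \ and t

# In a JSON API body, the backslash itself must be escaped: \\t
api_body = '{"value": "\\\\t"}'     # the JSON text is {"value": "\\t"}
api_value = json.loads(api_body)["value"]

# Both routes produce the same stored configuration value.
assert ui_value == api_value
```

In other words, `\t` in the UI and `\\t` in an API body both arrive as the same two-character sequence once the JSON layer has decoded the request.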