Released on December 10, 2024, this maintenance release includes new generative AI features and some bug fixes. To learn more, see the release notes below.

Platform Support and Component Versions

Kubernetes platform support

Lucidworks has tested and validated support for the following Kubernetes platform and versions:
  • Google Kubernetes Engine (GKE): 1.28, 1.29, 1.30
For more information on Kubernetes version support, see the Kubernetes support policy.

Component versions

The following table details the versions of key components that may be critical to deployments and upgrades.
Component             Version
Solr                  fusion-solr 5.9.7 (based on Solr 9.6.1)
ZooKeeper             3.9.1
Spark                 3.2.2
Ingress Controllers   Nginx, Ambassador (Envoy), GKE Ingress Controller. Istio is not supported.
More information about support dates can be found at Lucidworks Fusion Product Lifecycle.

New Features

Generative AI predictions with Lucidworks AI

New Lucidworks AI pipeline stages are introduced in this release to enrich your index and search with Generative AI predictions:
  • The LWAI Prediction index stage supports asynchronous and synchronous enrichment, adding predictions when indexing your data.
    See Configure the LWAI Prediction index stage for more detailed instructions about configuring this stage.
  • The LWAI Prediction query stage fetches synchronous predictions to add to your query response. Configure the LWAI Prediction query stage explains in detail how to configure this stage.
The LWAI Prediction index stage is a Fusion index pipeline stage that enriches your index with Generative AI predictions. It defaults to asynchronous processing, which does not block the pipeline while waiting for a response from Lucidworks AI.

For reference information, see LWAI Prediction index stage.

To use this stage, non-admin Fusion users must be granted the PUT,POST,GET:/LWAI-ACCOUNT-NAME/** permission in Fusion, where LWAI-ACCOUNT-NAME is the Lucidworks AI API Account Name defined in Lucidworks AI Gateway when this stage is configured.

To configure this stage:
  1. Sign in to Fusion and click Indexing > Index Pipelines.
  2. Click Add+ to add a new pipeline.
  3. Enter the name in Pipeline ID.
  4. Click Add a new pipeline stage.
  5. In the AI section, click LWAI Prediction.
  6. In the Label field, enter a unique identifier for this stage.
  7. In the Condition field, enter a script that results in true or false, which determines if the stage should process.
  8. In the Account Name field, select the Lucidworks AI API account name defined in Lucidworks AI Gateway.
  9. In the Use Case field, select the Lucidworks AI use case to associate with this stage.
    • To generate a list of the use cases for your organization, see Use Case API.
    • If the Call Asynchronously? check box is selected, see available use cases described in Async Prediction API.
    • If the Call Asynchronously? check box is not selected, see available use cases described in Prediction API.
  10. In the Model field, select the Lucidworks AI model to associate with this stage.
If you do not see any model names and you are a non-admin Fusion user, verify with a Fusion administrator that your user account has these permissions: PUT,POST,GET:/LWAI-ACCOUNT-NAME/**. For example, for an account named my-openai-account (a hypothetical name), the required permission is PUT,POST,GET:/my-openai-account/**. Your Fusion account name must match the name of the account that you selected in the Account Name dropdown. For more information about models, see the Lucidworks AI documentation.
  11. In the Input context variable field, enter the name of the variable in context to be used as input. Template expressions are supported.
  12. In the Destination field name and context output field, enter the name that will be used as both the field name in the document where the prediction is written and the context variable that contains the prediction. The resulting fields are sketched in code after this list.
  • If the Call Asynchronously? check box is selected and a value is entered in this field:
    • {destination name}_t is the full response.
    • In the document:
      • _lw_ai_properties_ss contains the Lucidworks account, boolean setting for async, use case, input for the call, and the collection.
      • _lw_ai_request_count is the number of GET requests by predictionId, and _lw_ai_success_count is the number of responses without errors. These two fields are used for debugging only. The most useful measure is the ratio of _lw_ai_success_count to _lw_ai_request_count; tune the deployment so that this ratio is as close to 1.0 as possible.
      • enriched_ss contains the use case. This can be used as a boolean value to verify if the use case indexed successfully.
  • If the Call Asynchronously? check box is not selected and a value is entered in this field:
    • {destination name}_t is the full response.
  • If no value is entered in this field (regardless of the Call Asynchronously? check box setting):
    • _lw_ai_{use case}_t is the response.response object, which is the raw model output.
    • _lw_ai_{use case}_response_s is the full response.
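As a concrete illustration, this minimal Python sketch shows the document fields produced by an asynchronous call with a Destination field name of summary (a hypothetical name), along with the success-ratio check described above. All values are placeholders, not actual API output.

    # Hypothetical document after an async LWAI Prediction call with the
    # "Destination field name and context output" field set to "summary".
    doc = {
        "summary_t": "<full response from Lucidworks AI>",
        "_lw_ai_properties_ss": [
            "account=my-account",        # Lucidworks account (placeholder)
            "async=true",                # boolean setting for async
            "useCase=summarization",     # use case (placeholder)
            "input=description_t",       # input for the call (placeholder)
            "collection=products",       # collection (placeholder)
        ],
        "_lw_ai_request_count": 4,       # GET requests by predictionId (debugging only)
        "_lw_ai_success_count": 4,       # responses without errors (debugging only)
        "enriched_ss": ["summarization"],  # use case indexed successfully
    }

    # Debugging measure: tune the deployment so this ratio approaches 1.0.
    ratio = doc["_lw_ai_success_count"] / doc["_lw_ai_request_count"]
    print(ratio)  # 1.0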
  13. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. The useCaseConfig parameter is only applicable to certain use cases.
  • If the Call Asynchronously? check box is selected, useCaseConfig information for each applicable use case is described in Async Prediction API.
  • If the Call Asynchronously? check box is not selected, useCaseConfig information for each applicable use case is described in Prediction API.
  14. In the Model Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases.
  • If the Call Asynchronously? check box is selected, modelConfig information is described in Async Prediction API.
  • If the Call Asynchronously? check box is not selected, modelConfig information is described in Prediction API.
  15. In the API Key field, enter the secret value specified in the external model.
  • For OpenAI models, "apiKey" is the value in the model’s "[OPENAI_API_KEY]" field. For more information, see Authentication API keys.
  • For Azure OpenAI models, "apiKey" is the value generated by Azure in either of the model’s "[KEY1 or KEY2]" fields. For requirements to use Azure models, see Generative AI models.
  • For Google VertexAI models, "apiKey" is the value in the model’s "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]" field. For more information, see Create and delete Google service account keys.
  16. To run the API call asynchronously, select the Call Asynchronously? check box, which specifies that the stage uses the Lucidworks AI Async Prediction API endpoints. If this is selected, the API call does not block the pipeline while waiting for a response from Lucidworks AI.
If the check box is not selected, the API call uses the Prediction API, which blocks the pipeline until a response is received from Lucidworks AI. Performance of other API calls can be impacted.
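This minimal Python sketch contrasts a blocking Prediction API call with an Async Prediction API submit-and-poll flow. The endpoint paths, payload shape, and status field are illustrative assumptions; consult the Prediction API and Async Prediction API documentation for the actual contracts.

    import time
    import requests

    BASE = "https://example.applications.lucidworks.com"  # hypothetical gateway URL
    HEADERS = {"Authorization": "Bearer <token>"}          # placeholder credentials
    PAYLOAD = {"batch": [{"text": "Document body to enrich"}]}  # assumed shape

    # Synchronous: a single request that blocks until the prediction is ready.
    sync_resp = requests.post(
        f"{BASE}/ai/prediction/summarization/my-model",    # hypothetical path
        json=PAYLOAD, headers=HEADERS,
    )
    print(sync_resp.json())

    # Asynchronous: submit the request, then poll by predictionId.
    submitted = requests.post(
        f"{BASE}/ai/async-prediction/summarization/my-model",  # hypothetical path
        json=PAYLOAD, headers=HEADERS,
    ).json()
    prediction_id = submitted["predictionId"]
    while True:
        result = requests.get(
            f"{BASE}/ai/async-prediction/{prediction_id}",     # hypothetical path
            headers=HEADERS,
        ).json()
        if result.get("status") != "SUBMITTED":  # assumed status value
            break
        time.sleep(1)  # back off between polls
    print(result)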
  17. In the Maximum Asynchronous Call Tries field, enter the maximum number of times to send an asynchronous API call before the system generates a failure error.
  18. Select the Fail on Error check box to generate an exception if an error occurs while generating a prediction for a document.
  19. Click Save.

Additional requirements

Additional requirements to use async calls include:
  • Use a V2 connector. Only V2 connectors support this task; other options, such as PBL or V1 connectors, do not.
  • Remove the Apache Tika stage from your parser because it can cause datasource failures with the following error: “The following components failed: [class com.lucidworks.connectors.service.components.job.processor.DefaultDataProcessor : Only Tika Container parser can support Async Parsing.]”
  • Replace the Solr Indexer stage with the Solr Partial Update Indexer stage, configured with the following settings (sketched in code after this list):
    • Enable Concurrency Control set to off
    • Reject Update if Solr Document is not Present set to off
    • Process All Pipeline Doc Fields set to on
    • Allow reserved fields set to on
    • A parameter with Update Type, Field Name & Value in Updates
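For orientation, the settings above might look like the following in an exported stage definition. The key names here are assumptions, not the exact schema of the Solr Partial Update Indexer stage; export your own stage configuration to see the real property names.

    # Hypothetical Solr Partial Update Indexer stage configuration with the
    # settings listed above. Key names are assumptions; the real schema may differ.
    stage_config = {
        "type": "solr-partial-update-index",  # assumed stage type identifier
        "enableConcurrencyControl": False,    # Enable Concurrency Control: off
        "rejectMissingDocs": False,           # Reject Update if Solr Document is not Present: off
        "processAllPipelineDocFields": True,  # Process All Pipeline Doc Fields: on
        "allowReservedFields": True,          # Allow reserved fields: on
        "updates": [
            # A parameter with Update Type, Field Name & Value in Updates.
            {"updateType": "set", "fieldName": "summary_t", "value": "<value>"},
        ],
    }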
The LWAI Prediction query stage is a Fusion query pipeline stage that enriches your search results with Generative AI predictions.

For reference information, see LWAI Prediction query stage.

To use this stage, non-admin Fusion users must be granted the PUT,POST,GET:/LWAI-ACCOUNT-NAME/** permission in Fusion, where LWAI-ACCOUNT-NAME is the Lucidworks AI API Account Name defined in Lucidworks AI Gateway when this stage is configured.

To configure this stage:
  1. Sign in to Fusion and click Querying > Query Pipelines.
  2. Click Add+ to add a new pipeline.
  3. Enter the name in Pipeline ID.
  4. Click Add a new pipeline stage.
  5. In the AI section, click LWAI Prediction.
  6. In the Label field, enter a unique identifier for this stage.
  7. In the Condition field, enter a script that results in true or false, which determines if the stage should process.
  8. Select Asynchronous Execution Config if you want to run this stage asynchronously. If this field is enabled, complete the following fields:
    1. Select Enable Async Execution. Fusion automatically assigns an Async ID value to this stage. Change this to a more memorable string that describes the asynchronous stages you are merging, such as signals or access_control.
    2. Copy the Async ID value.
  9. In the Account Name field, select the name of the Lucidworks AI integration defined when the integration was created.
  10. In the Use Case field, select the Lucidworks AI use case to associate with this stage.
  • To generate a list of the use cases for your organization, see Use Case API.
  • The available use cases are described in Prediction API.
  11. In the Model field, select the Lucidworks AI model to associate with this stage.
If you do not see any model names and you are a non-admin Fusion user, verify with a Fusion administrator that your user account has these permissions: PUT,POST,GET:/LWAI-ACCOUNT-NAME/**. Your Fusion account name must match the name of the account that you selected in the Account Name dropdown. For more information about models, see the Lucidworks AI documentation.
  12. In the Input context variable field, enter the name of the variable in context to be used as input. Template expressions are supported.
  13. In the Destination variable name and context output field, enter the name that will be used as both the query response header in the prediction results and the context variable that contains the prediction.
  • If a value is entered in this field:
    • {destination name}_t is the full response.
    • In the context:
      • _lw_ai_properties_ss contains the Lucidworks account, boolean setting for async, use case, input for the call, and the collection.
  • If no value is entered in this field:
    • _lw_ai_{use case}_t is the response.response object, which is the raw model output.
    • _lw_ai_{use case}_response_s is the full response.
  14. Select the Include Response Documents? check box to include the response documents in the Lucidworks AI request. This option is only available for certain use cases. If this is selected, run the Solr Query stage to ensure documents exist before running the LWAI Prediction query stage.
Response documents must be included in the RAG use case, which supports attaching a maximum of three response documents. To prevent errors, enter all of the entries described in the Document Field Mappings section.
  15. In the Document Field Mappings section, enter the LW AI Document field name and its corresponding Response document field name to map from input documents to the fields accepted by the Prediction API RAG use case. The fields are described in the Prediction API.
If information is not entered in this section, the default mappings are used, as listed below and sketched in code after the list.
  • The body and source fields are required.
    • body - description_t is the contents of the document.
    • source - link_t is the URL/ID of the document.
  • The title and date fields are optional.
    • title - title_t is the title of the document.
    • date - _lw_file_modified_tdt is the creation date of the document in epoch time format.
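Expressed as a mapping from LW AI Document field name to Response document field name, the defaults above are, in a minimal sketch:

    # Default Document Field Mappings for the RAG use case:
    # LW AI Document field name -> Response document field name.
    default_mappings = {
        "body": "description_t",          # required: contents of the document
        "source": "link_t",               # required: URL/ID of the document
        "title": "title_t",               # optional: title of the document
        "date": "_lw_file_modified_tdt",  # optional: creation date in epoch time
    }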
  16. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI.
  • The useCaseConfig parameter is only applicable to certain use cases. For more information, see the Async Prediction API and the Prediction API.
  • The memoryUuid parameter is required in the Standalone Query Rewriter use case, and is optional in the RAG use case. For more information, see Prediction API.
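For example, a useCaseConfig entry that carries the memoryUuid parameter might look like the following sketch; the UUID is a placeholder value.

    # Hypothetical useCaseConfig for the RAG or Standalone Query Rewriter use case.
    # memoryUuid identifies the stored conversation memory to reuse (placeholder).
    use_case_config = {
        "memoryUuid": "00000000-0000-0000-0000-000000000000",
    }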
  17. In the Model Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases. For more information, see Prediction API.
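As an illustration, a modelConfig section might carry sampling parameters such as the following. The parameter names are common generative AI settings and are assumptions here; verify them against the Prediction API documentation.

    # Hypothetical modelConfig parameters for a generative AI use case.
    model_config = {
        "temperature": 0.2,  # lower values make output more deterministic (assumed name)
        "maxTokens": 512,    # cap on generated response length (assumed name)
    }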
  18. In the API Key field, enter the secret value specified in the external model.
  • For OpenAI models, "apiKey" is the value in the model’s "[OPENAI_API_KEY]" field. For more information, see Authentication API keys.
  • For Azure OpenAI models, "apiKey" is the value generated by Azure in either of the model’s "[KEY1 or KEY2]" fields. For requirements to use Azure models, see Generative AI models.
  • For Google VertexAI models, "apiKey" is the value in the model’s "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]" field. For more information, see Create and delete Google service account keys.
  19. Select the Fail on Error check box to generate an exception if an error occurs during this stage.
  20. Click Save.
See Generative AI for more details about Lucidworks AI’s Generative AI capabilities, including pre-trained models hosted by Lucidworks. You can also use LWAI custom-trained embedding models and LWAI-hosted pre-trained models with these new stages.

A new API endpoint, /index-pipelines/{id}/async-enrichment/skip-pending, can be used to clear the queue of outstanding asynchronous prediction index requests if needed.
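A minimal sketch of calling that endpoint follows. The endpoint path comes from this release note; the HTTP method, host, and credentials are assumptions, so check the Fusion API reference before relying on this.

    import requests

    FUSION_API = "https://fusion.example.com:6764/api"  # hypothetical Fusion API base
    pipeline_id = "my-index-pipeline"                   # hypothetical pipeline ID

    # Clear the queue of outstanding asynchronous prediction index requests.
    # The PUT method is an assumption; verify it in the Fusion API reference.
    resp = requests.put(
        f"{FUSION_API}/index-pipelines/{pipeline_id}/async-enrichment/skip-pending",
        auth=("admin", "<password>"),                   # placeholder credentials
    )
    resp.raise_for_status()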

Bug fixes

  • Fixed an issue where some collections were not displayed in the Collections Manager if the system contained one or more orphaned child collections.
    Note that orphaned child collections are not displayed in the Collections Manager by design, but they are discoverable using the API or the Object Manager.

Removals

For more information, see Deprecations and Removals.

Bitnami removal

By August 28, 2025, Fusion’s Helm chart will reference internally built open-source images instead of Bitnami images, due to changes in how Bitnami hosts its images.