For supported Kubernetes versions and key component versions, see Platform support and component versions.
Key highlights
KNN performance enhancements
The Neural Hybrid Query stage now uses K-Nearest Neighbors (KNN) instead of vector similarity (vecSim). KNN is a more efficient method for finding the most relevant results, leading to faster, more accurate searches. New configuration options give you finer control over the speed and accuracy of neural hybrid queries:
- vectorDepth (Number of Vector Results) sets the number of vector results to return from the vector portion of the hybrid query. Increasing vectorDepth retrieves more vector results but may increase query time. Lowering it speeds up search but may reduce result diversity.
- vecPreFilterBoolean (Block pre-filtering) indicates whether to prevent pre-filtering. Pre-filtering can improve performance, while blocking it can yield more accurate facet counts and search results.
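For illustration only, a Neural Hybrid Query stage configuration that sets these options might look like the following JSON; the stage type and surrounding structure are hypothetical placeholders, and only vectorDepth and vecPreFilterBoolean come from this release:

```json
{
  "type": "neural-hybrid-query",
  "vectorDepth": 100,
  "vecPreFilterBoolean": false
}
```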
Java 17 for connectors
Connectors have been updated to use Java 17. If you use remote connectors, you must upgrade to JVM 17. See Configure Remote V2 Connectors for more information.
Configure Remote V2 Connectors
If you need to index data from behind a firewall, you can configure a V2 connector to run remotely on-premises using TLS-enabled gRPC.

Remote V2 connectors are not available by default. Contact your Lucidworks representative for more information about enabling them in your Managed Fusion deployment.
Prerequisites
Before you can set up an on-prem V2 connector, you must configure egress from your network to allow HTTP/2 communication into the Fusion cloud. You can use a forward proxy server to act as an intermediary between the connector and Fusion.

The following is required to run V2 connectors remotely:
- The plugin zip file and the connector-plugin-standalone JAR.
- A configured connector backend gRPC endpoint.
- Username and password of a user with a remote-connectors or admin role. This step is performed by Lucidworks.
- If the host where the remote connector is running is not configured to trust the server’s TLS certificate, Lucidworks must help configure the file path of the trust certificate collection.
If your version of Fusion doesn’t have the remote-connectors role by default, Lucidworks can create one. No API or UI permissions are required for the role.

Connector compatibility
Only V2 connectors can run remotely on-premises. The gRPC connector backend is not supported in Fusion environments deployed on AWS.

System requirements
The following is required for the on-prem host of the remote connector:
- (Managed Fusion 5.9.0-5.9.10) JVM version 11
- (Managed Fusion 5.9.11) JVM version 17
- Minimum of 2 CPUs
- 4GB memory
Enable backend ingress
NOTE: Contact Lucidworks support to complete this step.

In your rpc-service/values.yaml file, configure this section as needed:
- Set enabled to true to enable the backend ingress.
- Set pathtype to Prefix or Exact.
- Set path to the path where the backend will be available.
- Set host to the host where the backend will be available.
- In Fusion 5.9.6 only, you can set ingressClassName to one of the following: nginx for the Nginx Ingress Controller, or alb for the AWS Application Load Balancer (ALB).
- Configure TLS and certificates according to your CA’s procedures and policies. TLS must be enabled in order to use AWS ALB for ingress.
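A sketch of what this section of rpc-service/values.yaml might look like; the key nesting and example values are illustrative and may differ in your chart version:

```yaml
ingress:
  enabled: true              # enable the backend ingress
  ingressClassName: nginx    # or "alb" for AWS ALB (settable in Fusion 5.9.6 only)
  pathtype: Prefix           # or "Exact"
  path: /connectors-backend  # path where the backend will be available
  host: fusion.example.com   # host where the backend will be available
  tls:
    enabled: true            # TLS is required when using AWS ALB
```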
Connector configuration example
Minimal example
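The original minimal configuration listing is not reproduced here. As a hedged sketch, a minimal configuration YAML supplies the backend endpoint, the credentials of the remote-connectors or admin user, and the plugin to run; all property names and values below are illustrative and may differ in your release:

```yaml
# Illustrative minimal remote connector configuration.
kafka-bridge:
  target: fusion.example.com:443   # connector backend gRPC endpoint
proxy:
  user: remote-user                # user with the remote-connectors or admin role
  password: my-encrypted-password  # see Password encryption below
plugin:
  path: ./my-connector-plugin.zip  # the plugin zip file
```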
Logback XML configuration file example
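A Logback configuration file for the connector could look like the following; this is generic Logback usage rather than a Fusion-specific requirement:

```xml
<configuration>
  <!-- Write connector log messages to a file -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>connector.log</file>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```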
Run the remote connector
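The original command listing is not reproduced here; a representative invocation of the standalone JAR, with hypothetical file names, is:

```bash
java -Dlogging.config=./logback.xml \
  -jar connector-plugin-standalone.jar ./connector-config.yaml
```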
The logging.config property is optional. If it is not set, logging messages are sent to the console.

Test communication
You can run the connector in communication testing mode. This mode tests communication with the backend without running the plugin, reports the result, and exits.

Encryption
In a deployment, communication to the connector’s backend server is encrypted using TLS. You should run this configuration without TLS only in a testing scenario. To disable TLS, set plain-text to true.
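For example, in the connector configuration YAML (shown at the top level for illustration; place it wherever your release defines bridge settings):

```yaml
plain-text: true   # disables TLS; use only for testing
```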
Egress and proxy server configuration
One of the methods you can use to allow outbound communication from behind a firewall is a proxy server. You can configure a proxy server to allow certain communication traffic while blocking unauthorized communication. If you use a proxy server at the site where the connector is running, you must configure the following properties:
- Host. The host where the proxy server is running.
- Port. The port the proxy server is listening on for communication requests.
- Credentials. Optional proxy server user and password.
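A hedged sketch of these settings in the connector configuration YAML; the property names are illustrative:

```yaml
proxy-server:
  host: proxy.example.com   # host where the proxy server is running
  port: 3128                # port the proxy server listens on
  user: proxy-user          # optional credentials
  password: proxy-password  # encrypt this value as described below
```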
Password encryption
If you use a login name and password in your configuration, run the following utility to encrypt the password:
- Enter a user name and password in the connector configuration YAML.
- Run the standalone JAR with this property:
- Retrieve the encrypted passwords from the log that is created.
- Replace the clear password in the configuration YAML with the encrypted password.
Connector restart (5.7 and earlier)
The connector shuts down automatically whenever the connection to the server is disrupted, to prevent it from getting into a bad state. Communication disruption can happen, for example, when the server running in the connectors-backend pod shuts down and is replaced by a new pod. Once the connector shuts down, connector configuration and job execution are disabled, so you should restart the connector as soon as possible. You can use Linux scripts and utilities, such as Monit, to restart the connector automatically.

Recoverable bridge (5.8 and later)
If communication to the remote connector is disrupted, the connector tries to recover communication and gRPC calls. By default, six attempts are made to recover each gRPC call. The number of attempts can be configured with the max-grpc-retries bridge parameter.
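For example, in the connector configuration YAML (placement is illustrative):

```yaml
max-grpc-retries: 6   # recovery attempts per gRPC call; 6 is the default
```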
Job expiration duration (5.9.5 only)

The timeout value for unresponsive backend jobs can be configured with the job-expiration-duration-seconds parameter. The default value is 120 seconds.
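For example (placement is illustrative):

```yaml
job-expiration-duration-seconds: 120   # default timeout for unresponsive backend jobs
```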
Use the remote connector
Once the connector is running, it is available in the Datasources dropdown. If the standalone connector terminates, it disappears from the list of available connectors. Once it is re-run, it becomes available again, and configured connector instances are not lost.

Enable asynchronous parsing (5.9 and later)
To separate document crawling from document parsing, enable Tika asynchronous parsing on remote V2 connectors.

Asynchronous parsing service

An asynchronous parsing service for connectors has been added. While traditional synchronous parsing can delay indexing when handling large documents or high data volumes, asynchronous parsing processes files in the background, allowing indexing to continue without waiting for each document to be fully parsed. This new service brings more efficient data processing, improved search freshness, scalability without added complexity, and better support for diverse data types, including HTML, JSON, and other formats.

A new parsing stage, Apache Tika Container, has been added. This stage is required when using asynchronous parsing for connectors. To use asynchronous parsing for connectors, make sure Async Parsing is checked in the datasource and the Apache Tika Container parser stage is enabled in the index pipeline. See Use Tika asynchronous parsing for detailed steps to set up asynchronous parsing.
Use Tika asynchronous parsing
This topic describes how to set up your application to use Tika asynchronous parsing. Unlike synchronous Tika parsing, which uses a parser stage, asynchronous Tika parsing is configured in the datasource and index pipeline. For more information, see Asynchronous Tika Parsing.
Field names change with asynchronous Tika parsing. In contrast to synchronous parsing, asynchronous Tika parsing prepends parser_ to fields added to a document; for example, a body field is added as parser_body. System fields, which start with _lw_, are not prepended with parser_. If you are migrating to asynchronous Tika parsing and your search application configuration relies on specific field names, update your search application to use the new fields.

Configure the connectors datasource
- Navigate to your datasource.
- Enable the Advanced view.
- Enable the Async Parsing option. The asynchronous parsing service performs Tika parsing using Apache Tika Server. In Managed Fusion 5.8 through 5.9.10, other parsers, such as HTML and JSON, are not supported by the asynchronous parsing service, and enabling asynchronous parsing causes the parser configuration linked to your datasource to be ignored. In Managed Fusion 5.9.11 and later, other parsers, such as HTML and JSON, are supported, and the parser configuration linked to your datasource is used.
- Save the datasource configuration.
Configure the parser stage
This step is required in Managed Fusion 5.9.11 and later.
- Navigate to Parsers.
- Select the parser, or create a new parser.
- From the Add a parser stage menu, select Apache Tika Container Parser.
- (Optional) Enter a label for this stage. This label changes the name displayed from Apache Tika Container Parser to the value you enter in this field.
- If the Apache Tika Container Parser stage is not already the first stage, drag and drop the stage to the top of the stage list so it is the first stage that runs.
Configure the index pipeline
- Go to the Index Pipeline screen.
- Add the Solr Partial Update Indexer stage.
- Turn off the Reject Update if Solr Document is not Present option and turn on the Process All Pipeline Doc Fields option.
- Include an extra update field in the stage configuration using any update type and field name. In this example, an incremental field docs_counter_i with an increment value of 1 is added; see the sketch after this list.
- Enable the Allow reserved fields option.
- Click Save.
- Turn off or remove the Solr Indexer stage, and move the Solr Partial Update Indexer stage to be the last stage in the pipeline.
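Conceptually, the configuration above makes each processed document produce a Solr atomic update like the following sketch, which uses Solr’s standard atomic-update syntax; the document ID is hypothetical:

```json
{
  "id": "doc-123",
  "docs_counter_i": { "inc": 1 }
}
```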
Other parsers, such as HTML and JSON, are now supported by the asynchronous parsing service. When asynchronous parsing is enabled, the parser configuration linked to your datasource is used.
This update also includes a new Async Parsing API.
LucidAcademy

Lucidworks offers free training to help you get started. The course for the Asynchronous Parsing Service focuses on how to use asynchronous parsing to index your data more efficiently. Visit the LucidAcademy to see the full training catalog.
Job Config service
The new Job Config service provides more accurate and reliable job status reporting. This service uses asynchronous communication through Kafka for improved efficiency over the previous synchronous calls, with these benefits:
- More accurate job tracking. You get real-time, reliable job status updates, reducing uncertainty about whether a job is running, completed, or failed.
- Faster troubleshooting. With detailed job histories and improved reporting, teams can quickly diagnose issues instead of chasing down incomplete or outdated job statuses.
- Seamless transition. For most users, no action is required: API calls through api-gateway are automatically rerouted. Internal API users need only a simple update to continue tracking jobs accurately.

Several job-related endpoints have moved from the admin service to the new job-config service. If you make API calls through the api-gateway service, you do not need to make any changes; these endpoints are automatically rerouted to the new job-config service.
See Job Config API for reference information about these endpoints.
If you are making internal API calls to the admin service using any of these endpoints, you must update your API calls to point to the new job-config service.

Security updates
Lucidworks remains committed to providing a secure and resilient platform. Managed Fusion 5.9.11 includes critical security updates across a number of Managed Fusion services, including the admin, connectors, distributed compute, indexing, job configuration, and query services, ensuring continued protection and reliability for your deployments.

API path changes
Trailing slashes are no longer supported when making API calls. For example, a request to /api/apps works, but the same request with a trailing slash, /api/apps/, does not.

Known issues
- Saving large pipelines during high traffic may trigger service instability. In some environments, saving large query pipelines while handling high traffic loads can cause the Query service to crash with OOM errors due to thread contention. Managed Fusion 5.9.14 resolves this issue. If you’re impacted and not yet on this version, contact Lucidworks Support for mitigation options.
- Scheduled jobs that perform Solr operations may fail to run or save due to permission-related issues in the job-config service. In some cases, users without full permissions can create or modify scheduled tasks that execute actions they aren’t authorized to run directly, specifically tasks that invoke Solr using solr:// URIs. These jobs may fail silently or prevent schedule changes from being saved. This issue can also affect the execution of jobs after upgrade. It is fixed in Fusion 5.9.13.
- Config Sync may remove job schedules during upgrade. During upgrades to Managed Fusion 5.9.11, Config Sync may remove scheduled job entries from the cluster but fail to reapply them afterward. This can result in missing or disabled job schedules, depending on the environment. This issue is fixed in Managed Fusion 5.9.12.
- Jobs API returns incorrect timestamps for jobs scheduled after 12:00 UTC. In Managed Fusion 5.9.11, job timestamps returned by the API may be off by 12 hours if the time is after noon UTC. This affects scheduling accuracy and may cause the UI to misinterpret whether a change has been made. This issue is fixed in Managed Fusion 5.9.12.
- Cannot update existing job triggers in the Schedulers view. In Managed Fusion 5.9.11, scheduled job triggers cannot be modified due to incorrect timestamp comparison logic in the Admin UI. You must delete and recreate the trigger instead. This issue is fixed in Managed Fusion 5.9.12.
- Scheduler may stop working if ZooKeeper becomes briefly unavailable. If all job-config pods lose connection to ZooKeeper in Managed Fusion 5.9.11, they may fail to re-elect a leader, halting all scheduling until one of the pods is manually restarted. This issue is fixed in Managed Fusion 5.9.12.
- Datasource jobs show an incorrect “started by” user. In Managed Fusion 5.9.11, datasource jobs started in the UI incorrectly show default-subject as the initiating user in job history instead of the actual user. This issue is fixed in Managed Fusion 5.9.12.
- Schema API incompatibility with nonstandard schema file names. In Managed Fusion 5.9.11, new apps save Solr schemas with the name managed-schema.xml instead of managed-schema, while older apps may include only a managed-schema file. This mismatch can cause errors when editing or previewing schemas using the Schema API, particularly when the expected file extension is missing. For existing config sets, it’s best to copy the contents from the old file name to the new file name, then delete the old one. Managed Fusion 5.9.12 resolves this issue by supporting both file names, ensuring backward compatibility with older apps and consistent behavior for new ones.
- Scheduled jobs configured to trigger based on the success or failure of another job do not execute as expected. The root cause is that the completion event of a job does not trigger the onJobEventReceived handler. This issue is fixed in Managed Fusion 5.9.12.
- Aborted jobs in Managed Fusion are listed twice in the Job History. This issue is fixed in Managed Fusion 5.9.12.
- When a new datasource is configured in Indexing > Index Workbench, the simulated results do not display, and the following error is generated: “Failed to simulate results from a working pipeline.” As a result, the Index Workbench cannot simulate results for new datasources, preventing users from configuring the indexing process within the workbench. This issue is fixed in Managed Fusion 5.9.12. To work around this issue, configure each part of the indexing process separately instead of using the Index Workbench:
  - Configure datasources in Indexing > Datasources.
  - Configure parsers in Indexing > Parsers.
  - Configure index pipelines in Indexing > Index Pipelines.
- In Managed Fusion 5.9.10, the Apache Spark 3.4.1 upgrade affected jobs that rely on Python 3.7 behavior or compatibility; these jobs may have been automatically updated to Python 3.10.x and may no longer function correctly. If you have not yet done so, update your code for Python 3.10.x compatibility, then test your Spark jobs in a staging environment before deploying to production.
Deprecations
For full details on deprecations, see Deprecations and Removals.
- The Parsers Indexing CRUD API, which provides create, read, update, and delete operations for parsers, is deprecated in Managed Fusion 5.9.11. This feature was originally deprecated in Managed Fusion 5.12.0. It will be removed in a future release no later than September 4, 2025. The Async Parsing API replaces the Parsers Indexing CRUD API and is available in Managed Fusion 5.9.11. This API provides improved functionality and aligns with Managed Fusion’s updated architecture, ensuring consistency across versions.
- The Word2Vec Model Training Job, which trains a shallow neural model to generate vector embeddings for text data, is deprecated in Managed Fusion 5.9.11. It will be removed in a future release no later than September 4, 2025. Lucidworks offers a wide array of AI solutions, including Lucidworks AI, which provides easy-to-use, AI-powered data discovery and search capabilities, including:
  - Pre-trained embedding models
  - Custom embedding model training
  - Seamless integration with Managed Fusion
Removals
For more information, see Deprecations and Removals.

Bitnami removal

By August 28, 2025, Fusion’s Helm chart will reference internally built open-source images instead of Bitnami images, due to changes in how Bitnami hosts images.

Platform Support and Component Versions
Kubernetes platform support
Lucidworks has tested and validated support for the following Kubernetes platform and versions:
- Google Kubernetes Engine (GKE): 1.28, 1.29, 1.30, 1.31
Component versions
The following table details the versions of key components that may be critical to deployments and upgrades.

| Component | Version |
|---|---|
| Solr | fusion-solr 5.9.11 (based on Solr 9.6.1) |
| ZooKeeper | 3.9.1 |
| Spark | 3.4.1 |
| Ingress Controllers | Nginx, Ambassador (Envoy), GKE Ingress Controller. Istio not supported. |