vectorDepth (Number of Vector Results) sets the number of vector results to return from the vector portion of the hybrid query. Increasing vectorDepth retrieves more vector results but may increase query time. Lowering it speeds up search but may reduce result diversity.

vecPreFilterBoolean (Block pre-filtering) indicates whether to prevent pre-filtering. Pre-filtering can improve performance, while blocking it can yield more accurate facet counts and search results.

Configure Remote V2 Connectors
You must have a user with the remote-connectors or admin role. If the remote-connectors role does not exist by default, you can create one. No API or UI permissions are required for the role.

In the values.yaml file, configure this section as needed:

- Set enabled to true to enable the backend ingress.
- Set pathtype to Prefix or Exact.
- Set path to the path where the backend will be available.
- Set host to the host where the backend will be available.
- Set ingressClassName to one of the following:
  - nginx for the Nginx Ingress Controller
  - alb for the AWS Application Load Balancer (ALB)

The logging.config property is optional. If it is not set, logging messages are sent to the console. To log in plain text instead, set plain-text to true.

In some cases, the connectors-backend
pod shuts down and is replaced by a new pod. Once the connector shuts down, connector configuration and job execution are disabled. To prevent that from happening, restart the connector as soon as possible. You can use Linux scripts and utilities, such as Monit, to restart the connector automatically. Reconnection behavior is controlled by the max-grpc-retries bridge parameters, and job expiration is controlled by the job-expiration-duration-seconds parameter; the default value is 120 seconds.

Use Tika Asynchronous Parsing
Asynchronous Tika parsing prepends parser_ to the names of fields added to a document. System fields, which start with _lw_, are not prepended with parser_. If you are migrating to asynchronous Tika parsing and your search application configuration relies on specific field names, update your search application to use the new fields. In addition, a docs_counter_i field with an increment value of 1 is added:
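The field handling described above can be sketched in Python. This is an illustrative model only, not Fusion's actual implementation; the document field names and the atomic-update increment syntax are assumptions.

```python
def apply_async_tika_field_naming(doc: dict) -> dict:
    """Model of the behavior described above: parser-added fields are
    prefixed with 'parser_', system fields starting with '_lw_' are left
    unchanged, and a 'docs_counter_i' field with an increment of 1 is added.
    Field names and increment syntax are illustrative assumptions."""
    renamed = {}
    for name, value in doc.items():
        if name.startswith("_lw_"):
            renamed[name] = value               # system field: no prefix
        else:
            renamed["parser_" + name] = value   # parser-added field: prefixed
    renamed["docs_counter_i"] = {"inc": 1}      # increment value of 1
    return renamed

# Hypothetical document with one parser-added field and one system field
result = apply_async_tika_field_naming(
    {"body_t": "extracted text", "_lw_batch_id_s": "b1"}
)
```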
Requests made through the api-gateway automatically reroute. Internal API users need only a simple update to continue tracking jobs accurately. The following endpoints have moved from the admin service to the new job-config service:

If you call these endpoints through the api-gateway service, you do not need to make any changes; the endpoints above are automatically rerouted to the new job-config service.
See Job Config API for reference information about these endpoints.
If you call the admin service directly using any of the endpoints above, you must update your API calls to point to the new job-config service.

This release upgrades the Kubernetes Client library to version 6.2.0, preventing token refresh failures that previously caused service disruptions. Affected services, including the connectors backend, indexing, job REST server, and job launcher services, now operate reliably in OIDC-enabled AKS and EKS environments, strengthening Fusion's stability on modern Kubernetes deployments.
In Fusion 5.9.x versions through 5.9.13, saving a large query pipeline during high query volume can result in thread lock, excessive memory use, and eventual OOM errors in the Query service.
This issue is fixed in Fusion 5.9.14.
In some cases, users without full permissions can create or modify scheduled tasks that execute actions they aren't authorized to run directly, specifically tasks that invoke Solr using solr:// URIs.
These jobs may fail silently or prevent schedule changes from being saved. This issue can also affect the execution of jobs after upgrade. It is fixed in Fusion 5.9.13.
When using ArgoCD to deploy Fusion 5.9.10 or 5.9.11 with TLS options enabled, Helm chart rendering fails due to the use of the lookup
function, which is unsupported by ArgoCD. This prevents ArgoCD from generating manifests, blocking deployment workflows that rely on TLS configuration.
This issue is fixed in Fusion 5.9.12.
As a workaround, deploy Fusion without enabling TLS in ArgoCD-managed environments, or perform the deployment using Helm directly.
Fusion 5.9.11’s Helm chart incorrectly references an internal Lucidworks Artifactory repository for the Solr image. This can prevent successful deployment in environments without access to internal infrastructure.
This issue is fixed in Fusion 5.9.12.
As a workaround, override the Solr image repository in your custom values.yaml
file.
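As a sketch, such an override might look like the following in a custom values.yaml. The key paths, repository, and tag shown here are placeholders that depend on your chart version; check your chart's values for the actual structure.

```yaml
# Hypothetical sketch: point the Solr image at a publicly accessible
# repository instead of the internal Artifactory reference.
solr:
  image:
    repository: solr        # placeholder public repository
    tag: "9.6.1"            # match the Solr version for your Fusion release
```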
In Fusion 5.9.11, job timestamps returned by the API may be off by 12 hours if the time is after noon UTC. This affects scheduling accuracy and may cause the UI to misinterpret whether a change has been made.
This issue is fixed in Fusion 5.9.12.
In Fusion 5.9.11, scheduled job triggers cannot be modified due to incorrect timestamp comparison logic in the Admin UI. You must delete and recreate the trigger instead.
This issue is fixed in Fusion 5.9.12.
If all job-config
pods lose connection to ZooKeeper in Fusion 5.9.11, they may fail to re-elect a leader, halting all scheduling until one of the pods is manually restarted.
This issue is fixed in Fusion 5.9.12.
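Until the fix is applied, the manual restart can be performed with kubectl. The namespace and label selector below are assumptions; substitute the values used in your deployment.

```shell
# Hypothetical sketch: restart a job-config pod so a leader can be re-elected.
# Replace the namespace and label selector with your cluster's values.
kubectl -n fusion delete pod -l app.kubernetes.io/name=job-config
```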
In Fusion 5.9.11, datasource jobs started in the UI incorrectly show default-subject
as the initiating user in job history instead of the actual user.
This issue is fixed in Fusion 5.9.12.
In Fusion 5.9.11, new apps save Solr schemas with the name managed-schema.xml
instead of managed-schema
, while older apps may include only a managed-schema
file.
This mismatch can cause errors when editing or previewing schemas using the Schema API, particularly when the expected file extension is missing.
For existing config sets, it’s best to copy the contents from managed-schema
to a new managed-schema.xml
, then delete managed-schema
.
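One way to perform that copy for a config set stored in ZooKeeper is with the Solr CLI's zk subcommands. The config set name and ZooKeeper connection string below are assumptions; adjust them for your environment.

```shell
# Hypothetical sketch: copy managed-schema to managed-schema.xml inside the
# "myapp" config set, then remove the old file.
bin/solr zk cp zk:/configs/myapp/managed-schema zk:/configs/myapp/managed-schema.xml -z localhost:9983
bin/solr zk rm zk:/configs/myapp/managed-schema -z localhost:9983
```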
Fusion 5.9.12 resolves this by supporting both file names, ensuring backward compatibility with older apps and consistent behavior for new ones.
The job scheduler can stop if job-config pods lose connection to ZooKeeper.
In rare cases, when a ZooKeeper connection is lost, the leader latch mechanism used by the job-config
service might fail to recover. If no node can re-establish leadership, the job scheduler stops until at least one job-config
pod is restarted.
This issue is fixed in Fusion 5.9.13.
While the Admin UI displays the current trigger time correctly, changes to the time are not saved. This is due to a mismatch in how the API formats UTC timestamps, especially for times after 12:00 UTC, which prevents the UI from detecting that a change has been made.
To work around this, delete the existing scheduler entry and create a new one with the desired time.
This issue is resolved in Fusion 5.9.13.
Jobs configured to run based on the success or failure of another job may not trigger as expected. This includes configurations using “on_success_or_failure” and “Start + Interval” options. This issue is resolved in Fusion 5.9.13.
To work around this issue, configure each part of the indexing process separately instead of using the Index Workbench:
Lucidworks offers a wide array of AI solutions, including Lucidworks AI. Lucidworks AI provides easy-to-use, AI-powered data discovery and search capabilities including:
| Component | Version |
|---|---|
| Solr | fusion-solr 5.9.11 (based on Solr 9.6.1) |
| ZooKeeper | 3.9.1 |
| Spark | 3.4.1 |
| Ingress Controllers | Nginx, Ambassador (Envoy), GKE Ingress Controller. Istio is not supported. |