June 17, 2025
For example, you can use the `product_id` field as the collapse field to group all versions or SKUs of a product into a single search result.
You can also control how Fusion selects the variation that represents the collapsed group; the default is the one most relevant to the user's query.
For example, a user who searches for "red shoes" sees all of the red variations of shoes first, with the option to drill down and see all the variations.
Alternatively, you can select the representative variation by a field such as `sales_rank` or `popularity_score`.
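For context, this kind of grouping corresponds to Solr's collapsing query parser. The filter query below is an illustrative sketch of an equivalent raw Solr request, not Fusion's documented configuration; the `sales_rank` sort field is borrowed from the example above:

```
fq={!collapse field=product_id sort="sales_rank desc"}
```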
Fusion now exposes the `jwkSetTimeout` variable in the JWT realm settings, enabling better control over how long Fusion waits for a response when retrieving a JSON Web Key (JWK) set.
This improves authentication reliability in environments where key providers may respond slowly.
By increasing the default 500 ms timeout as needed (for example, to 2000 ms), you can reduce the risk of failed authentication due to network latency or external service delays.
You can configure this in the Fusion UI under the System > Access Control > Security Realms tab.
Alternatively, you can set it in the `security.initial-realm-configs` Spring Boot properties:
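The snippet below is a minimal sketch of such an override, assuming a YAML-style Spring Boot configuration; the realm name and the exact nesting under `security.initial-realm-configs` are assumptions, so verify them against your deployment:

```yaml
# Hypothetical sketch: raising the JWK set timeout for a JWT realm.
# The realm name and nesting shown here are illustrative assumptions.
security:
  initial-realm-configs:
    - name: my-jwt-realm      # hypothetical realm name
      realmType: jwt
      config:
        jwkSetTimeout: 2000   # milliseconds; default is 500
```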
You can now select the `min-max` or `max-scale` quantization methods directly in the pipeline configuration interface for the LWAI vectorization stages.
To select the quantization method, go to Model Config in the LWAI pipeline stage configuration and enter the `vectorQuantizationMethod` parameter with the value for the desired method:
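The Model Config entry is a key/value pair; the rendering below is an illustrative sketch rather than a verbatim export from the UI:

```yaml
# Model Config parameter in an LWAI vectorization stage (illustrative)
vectorQuantizationMethod: min-max   # or: max-scale
```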
Deploy Fusion at Scale
While the `setup_f5_*.sh` scripts are handy for getting started and proof-of-concept purposes, this article covers the planning process for building a production-ready environment.
You need the command-line tools for your platform, such as `gcloud` or `aws`, as well as `kubectl`.
See the platform-specific instructions linked above, or check with your cloud provider.
You also need the name of your cluster (the `-c` arg passed to the scripts). Clone the `fusion-cloud-native` repository:

```bash
git clone https://github.com/lucidworks/fusion-cloud-native
```
Next, run the `customize_fusion_values.sh` script.
Pass the `--help` parameter to see script usage details. The script produces the following files:

| File | Description |
| --- | --- |
| `<provider>_<cluster>_<namespace>_fusion_values.yaml` | Main custom values YAML used to override Helm chart defaults for Fusion microservices. |
| `<provider>_<cluster>_<namespace>_monitoring_values.yaml` | Custom values YAML used to configure Prometheus and Grafana. |
| `<provider>_<cluster>_<namespace>_fusion_resources.yaml` | Resource requests and limits for all microservices. |
| `<provider>_<cluster>_<namespace>_fusion_affinity.yaml` | Pod affinity rules to ensure multiple replicas for a single service are evenly distributed across zones and nodes. |
| `<provider>_<cluster>_<namespace>_upgrade_fusion.sh` | Script used to install and/or upgrade Fusion using the aforementioned custom values YAML files. |
Open the `<provider>_<cluster>_<namespace>_fusion_values.yaml` file to familiarize yourself with its structure and contents. Notice it contains a separate section for each of the Fusion microservices. The example configuration of the `query-pipeline` service below illustrates some important concepts about the custom values YAML file.
The file is named `<provider>_<cluster>_<namespace>_fusion_values.yaml`. For example, `gke_search_f5_fusion_values.yaml`.

| Parameter | Description |
| --- | --- |
| `<provider>` | The K8s platform you're running on, such as `gke`. |
| `<cluster>` | The name of your cluster. |
| `<namespace>` | The K8s namespace where you want to install Fusion. |
| `<node_selector>` | Specifies a `nodeSelector` label to find nodes to schedule Fusion pods on. |
The `--node-pool <node_selector>` label is very important. Using the wrong value will cause your pods to be stuck in the `pending` state. If you're not sure about the correct value for your cluster, pass `''` to let Kubernetes decide which nodes to schedule Fusion pods on.
`nodeSelector` labels are provider-specific. The `fusion-cloud-native` scripts use the following defaults for GKE and EKS:

| Provider | Default node selector |
| --- | --- |
| GKE | `cloud.google.com/gke-nodepool: default-pool` |
| EKS | `alpha.eksctl.io/nodegroup-name: standard-workers` |
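As an illustrative sketch of where such a label lands, a service section in the rendered custom values YAML might carry a `nodeSelector` block like the following; the exact key layout in the Fusion chart is an assumption here:

```yaml
# Hypothetical excerpt from a custom values YAML (key layout assumed)
query-pipeline:
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool   # GKE default from the table above
```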
Add the following to your `values.yaml` file to avoid a known issue that prevents the `kuberay-operator` pod from launching successfully:

```yaml
kuberay-operator:
  crd:
    create: true
```
| Flag | Description |
| --- | --- |
| `--node-pool` | Add a Fusion-specific label to your nodes. |
| `--with-resource-limits` | Configure resource requests/limits. |
| `--with-replicas` | Configure replica counts. |
| `--with-affinity-rules` | Configure pod affinity rules for Fusion services. |
Use `--node-pool` to add a Fusion-specific label to your nodes by passing `--node-pool 'fusion_node_type: <NODE_LABEL>'`.
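Putting these flags together, an invocation might look like the sketch below; the flags come from the table above, but the overall command line is hypothetical, so confirm it against `--help`:

```bash
# Hypothetical invocation of customize_fusion_values.sh; verify flags with --help.
./customize_fusion_values.sh -c search \
  --node-pool 'fusion_node_type: system' \
  --with-resource-limits \
  --with-replicas \
  --with-affinity-rules
```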
To use a custom storage class for the Solr volumes, create a `storageClass.yaml` file with the following contents:
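The original file contents are not reproduced here; the manifest below is a minimal sketch of an SSD-backed storage class on GKE, assuming the standard GCE persistent-disk CSI provisioner:

```yaml
# Hypothetical storageClass.yaml: SSD-backed storage for Solr volumes on GKE.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: solr-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Retain
allowVolumeExpansion: true
```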
Solr node pools are defined under the `nodePools` property. If any property for that StatefulSet needs to be changed from the default set of values, it can be set directly on the object representing the node pool; any properties that are omitted default to the base values. See the following example (additional whitespace added for display purposes only):
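The original example isn't reproduced here; the snippet below is a sketch reconstructed from the partition descriptions that follow (default, analytics, and search pools with the stated replica counts), and the exact chart keys are assumptions:

```yaml
# Hypothetical nodePools layout based on the surrounding description.
solr:
  nodePools:
    - name: ""            # default partition (empty-string suffix)
    - name: "analytics"   # labeled fusion_node_type=analytics
      replicaCount: 6
    - name: "search"      # labeled fusion_node_type=search
      replicaCount: 12
```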
The empty string `""` is the suffix for the default partition. The analytics partition is identified by `fusion_node_type=analytics`. You can use the `fusion_node_type` property in Solr auto-scaling policies to govern replica placement during collection creation. The search partition is identified by `fusion_node_type=search`, as defined in the `nodePools` section above. The default partition keeps the `nodePools` value `""`. The analytics partition's `replicaCount`, or number of Solr pods, is six. The search partition's `replicaCount` is twelve.

Each nodePool is automatically assigned the `-Dfusion_node_type` property of `search`, `system`, or `analytics`. This value matches the name of the nodePool; for example, `-Dfusion_node_type=search`. The Solr pods have a `fusion_node_type` system property, as shown below:
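The original output isn't reproduced here; illustratively, the property appears among the Solr JVM arguments:

```
-Dfusion_node_type=search
```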
To enable Kubernetes network policies for Fusion services, pass `--set global.networkPolicyEnabled=true` when installing the Fusion Helm chart.

To stage Fusion's images for a private Docker registry, use a machine with public internet access, referred to here as `envoy`:

1. Install Docker on `envoy`. You need at least 100GB of free disk for Docker.
2. Pull the Fusion images into `envoy`'s local registry. For example, to pull the query pipeline image, run `docker pull lucidworks/query-pipeline:5.9.0`. See `docker pull --help` for more information about pulling Docker images.
3. Connect `envoy` to the private Docker registry, most likely via a VPN connection. In this example, the private Docker registry is referred to as `<internal-private-registry>`.
4. Push the images from `envoy`'s Docker registry to the private registry. This will take a long time.
Fusion microservices let you supply the `imagePullSecrets` setting using the custom values YAML. However, other third-party services, including ZooKeeper, Pulsar, Prometheus, and Grafana, don't allow you to supply the pull secret using the custom values YAML. To patch the default service account for your namespace and add the pull secret, run the following:
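The original command isn't reproduced here; the sketch below shows the standard way to attach a pull secret to a namespace's default service account, with `<internal-private-secret>` and `<namespace>` as placeholders:

```bash
# Attach the image pull secret to the namespace's default service account.
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "<internal-private-secret>"}]}'
```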
Depending on your shell, you may need to escape the inner double quotes with a backslash (`\`) or reverse the order of single and double quotes. Replace `<internal-private-secret>` with the name of the secret you created in the steps above.
with the name of the secret you created in the steps above.customcerts.yaml
file is the example file in these instructions.
EXAMPLE-VALUES-FILE.yaml
with your previous values file.
init-container
with the name import-certs
.
.crt
file in $fusion_home/apps/jetty/connectors/etc/yourcertname.crt
:$fusion_home/apps/jetty/connectors/etc/yourcertname.crt
$fusion_home/apps/jetty/connectors/etc/yourcertname.crt
.crt`` file in
$fusion_home\apps\jetty\connectors\etc\yourcertname.crt“:customize_fusion_values.sh
script, run it using BASH:customize_fusion_values.sh
To enable monitoring, run the `customize_fusion_values.sh` script with the `--prometheus true` option. This creates an extra custom values YAML file for installing Prometheus and Grafana, `<provider>_<cluster>_<namespace>_monitoring_values.yaml`. For example: `gke_search_f5_monitoring_values.yaml`.
Run the `install_prom.sh` script to install Prometheus and Grafana in your namespace. Include the provider, cluster name, namespace, and Helm release as in the example below:
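The original example isn't shown here; a hypothetical invocation following the stated argument order (provider, cluster name, namespace, Helm release) might look like this, so confirm the actual usage with `--help`:

```bash
# Hypothetical invocation of install_prom.sh; verify usage with --help.
./install_prom.sh gke search f5 f5-monitoring
```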
Pass the `--help` parameter to see usage details for the `install_prom.sh` script.

Fusion 5.9.13 improves the recovery of the `job-config` service after ZooKeeper disruptions.
The `/actuator/health` endpoint now correctly reflects the actual status of the `job-config` service, even after temporary ZooKeeper unavailability.
This prevents false `DOWN` reports that could affect monitoring or automated recovery systems.
Fusion 5.9.13 updates the `fusion-spark-3.2.2` image to resolve a Fabric8 token refresh bug.
Spark jobs that use `fusion-spark-3.2.2` now run reliably in Kubernetes environments that require token-based authentication.
Fusion 5.9.13 fixes a job state issue in the `connectors-backend` service.
Fusion now properly resets internal job state, ensuring that failed jobs can be restarted reliably.
Jobs that previously failed with the error `The state should never be null` can now complete successfully.
Fusion 5.9.13 fixes a `job-config` handling issue that affected pre-existing app configurations, so that new schedules are reliably saved and acknowledged as expected.
This fix improves the `job-config` service to ensure scheduled jobs run as expected.

Fusion 5.9.13 updates the `web-apps` service to improve security and ensure compatibility with token authentication behavior on modern Kubernetes platforms.
In Fusion 5.9.13, the job-config service may falsely report as “down” in the Fusion UI, particularly during startup or in TLS-enabled deployments. This issue is fixed in Fusion 5.9.14.
In Fusion 5.9.13, strict validation in the job-config
service causes “Collection not found” errors when jobs or V2 datasources target Fusion collections that point to differently named Solr collections.
This issue is fixed in Fusion 5.9.14.
As a workaround, use V1 datasources or avoid using REST call jobs on remapped collections.
In Fusion 5.9.13, saving a large query pipeline during high query volume can result in thread lock, excessive memory use, and eventual OOM errors in the Query service. This issue is fixed in Fusion 5.9.14.
Clicking the Stop button has no effect in some cases where the backend job is no longer being tracked. This causes the job-config
service to ignore the job and prevents the system from updating the job status. A workaround is to issue a POST {"action": "start"}
to the appropriate job actions endpoint, which aborts the stuck job.
This issue is fixed in Fusion 5.9.14.
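A sketch of that workaround call, assuming Fusion's jobs API with a placeholder job ID; the host, credentials, and exact endpoint path are illustrative:

```bash
# Hypothetical workaround request; adjust host, credentials, and job ID.
curl -u admin:password -X POST \
  -H "Content-Type: application/json" \
  -d '{"action": "start"}' \
  "https://FUSION_HOST/api/jobs/<job-id>/actions"
```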
In Fusion 5.9.12 and 5.9.13, Spark jobs may vanish from the job list if a Spark driver pod is deleted. This behavior can cause confusion and require a job-launcher restart to restore job visibility.
This issue is fixed in Fusion 5.9.14.
Bitnami has deprecated `kafka-exporter` and renamed the Docker repository from `bitnami/kafka-exporter` to `bitnami/kafka-exporter-archived`.
If you were using `kafka-exporter`, then you need to update the repository name in your `values.yaml` file:
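The sketch below assumes the image settings are nested under the exporter's chart key in your `values.yaml`; the exact structure depends on your chart:

```yaml
# Point the exporter at the renamed repository (structure assumed).
kafka-exporter:
  image:
    repository: bitnami/kafka-exporter-archived   # was: bitnami/kafka-exporter
```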
Removed `X-XSS-Protection` header from session API responses
Fusion 5.9.13 removes the `X-XSS-Protection` HTTP response header from the session API.
This header is no longer supported by modern browsers and has no effect on security behavior.
Its removal helps avoid confusion during security audits and aligns with current web security standards.
| Component | Version |
| --- | --- |
| Solr | fusion-solr 5.9.13 (based on Solr 9.6.1) |
| ZooKeeper | 3.9.1 |
| Spark | 3.4.1 |
| Ingress Controllers | Nginx, Ambassador (Envoy), GKE Ingress Controller |
| Ray | ray[serve] 2.42.1 |