Fusion 5.9.13 is a maintenance release that introduces advanced SKU grouping with Solr collapse, custom certificates for indexing and querying services, and compatibility with Kubernetes 1.32. Fusion 5.9.13 also improves authentication resilience with a configurable JWT timeout, and resolves key scheduling and security bugs to ensure greater stability and compliance in enterprise environments.
For supported Kubernetes versions and key component versions, see Platform support and component versions.

What’s new

Unidirectional multi-region Solr for self hosted Fusion (CrossDC)

This release introduces built-in support for Apache Solr’s CrossDC (Cross Data Center) replication framework, enabling seamless uni-directional synchronization of Solr updates between data centers. This feature is now integrated into Fusion’s packaging to reduce the operational complexity and cost of implementing high availability and disaster recovery in self-hosted environments. With CrossDC, Solr update requests (including indexing, collection, and configset changes) from a primary cluster are mirrored to a secondary cluster using Apache Kafka. The new support includes:
  • A preconfigured Solr plugin for mirroring updates directly from the source Fusion cluster
  • A dedicated CrossDC consumer application for replaying those updates on the target cluster
  • Centralized configuration support with Solr ZooKeeper
  • Optional dead-letter queue handling for failed requests
  • Support for selective replication using command whitelisting
This update eliminates the need for custom configuration to enable CrossDC and ensures Fusion is ready for geo-redundant or hybrid-cloud architectures out of the box. For complete details, see Cross Data Center Replication. This initial release supports uni-directional replication only, from a source to a target Solr cluster.

Expanded support for collapsed search results

Fusion now gives you access to all of the available Solr settings for collapsing search results, giving you finer control over how Fusion groups variations of each item into a single search result. You can use collapse to improve conversion rates and customer satisfaction by streamlining search results, reducing cognitive load, and surfacing the most relevant product variations first. For example, you can use a product_id field as the collapse field to group all versions or SKUs of a product into a single search result. You can also control how Fusion selects the variation that represents the collapsed group; by default, it is the variation most relevant to the user’s query. For example, a user who searches for “red shoes” sees the red variations of shoes first, with the option to drill down and see all the variations.
Products collapsed by product_id with the red SKU selected
Additional capabilities include:
  • Faceting compatibility: Facets can reflect counts based on collapsed groups instead of individual SKUs.
  • Sorting options: Choose how the representative SKU is selected using sort fields like sales_rank or popularity_score.
  • Expand support: Optional expansion of collapsed groups allows users to see all SKUs for a product on demand.
  • Commerce Studio integration: Merchandising actions such as pinning, boosting, and burying now apply to the entire product group, not just individual SKUs.
  • Query Workbench support: You can preview collapsed and expanded result sets directly in Query Workbench for easy validation.
This update eliminates the need for custom collapse implementations and makes SKU/product rollup behavior a first-class capability in Fusion. For complete details about the new configuration options, see the Query Fields stage configuration reference.
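For reference, the collapse and expand behavior described above is built on Solr’s collapse query parser and expand component. The following YAML sketch shows the kind of Solr request parameters involved; the product_id and popularity_score field names are examples only, and the exact parameters are driven by your Query Fields stage configuration:
# Illustrative Solr request parameters behind collapsed results (field names are examples).
params:
  q: "red shoes"
  fq: "{!collapse field=product_id sort='popularity_score desc'}"   # one representative SKU per product group
  expand: true                                                      # also return the other SKUs in each group
  expand.rows: 10                                                   # number of expanded SKUs returned per group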

Kubernetes 1.32 support for better security and long-term compatibility

Fusion 5.9.13 introduces full compatibility with Kubernetes version 1.32, ensuring seamless deployment and operation on the latest Kubernetes platforms. This update allows you to take advantage of the latest stability, performance, and security improvements in Kubernetes, including better control over sidecar container behavior and improvements to admission webhooks and scheduling logic. By supporting Kubernetes 1.32, Fusion stays aligned with cloud provider upgrades and helps future-proof your infrastructure, especially on managed services like AKS, EKS, and GKE.

Improved JWT authentication resilience with configurable timeout

Fusion now allows you to configure the jwkSetTimeout setting in the JWT Realm settings, enabling better control over how long Fusion waits for a response when retrieving a JSON Web Key (JWK) set. This improves authentication reliability in environments where key providers may respond slowly. By increasing the default 500 ms timeout as needed (for example, to 2000 ms), you can reduce the risk of failed authentication due to network latency or external service delays. You can configure this in the Fusion UI under the System > Access Control > Security Realms tab. Alternatively, you can set this in the security.initial-realm-configs Spring Boot properties:
security:
  initial-realm-configs:
    realmType: jwt
    enabled: true
    name: jwt_okta
    config:
      autoCreateUsers: true
      jwtIssuer: https://HOSTNAME/oauth2/default
      jwkSetUri: https://HOSTNAME/oauth2/default/v1/keys
      jwkSetTimeout: 2000
    roleNames:
      - developer

Configurable vector quantization method in LWAI pipeline stages

Fusion 5.9.13 adds vector quantization to certain Lucidworks AI (LWAI) pipeline stages, making it easier to reduce memory usage and accelerate vector search without sacrificing quality. Quantization converts high-precision float vectors into compact 8-bit integer vectors, significantly lowering storage and compute costs. You can now choose between the min-max and max-scale quantization methods directly in the pipeline configuration interface for the LWAI vectorization stages. To select the quantization method, go to Model Config in the LWAI pipeline stage configuration and enter the vectorQuantizationMethod parameter with the value for the desired method.
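As a rough sketch only, the setting is a single name/value parameter in the stage’s Model Config. The exact Model Config layout and the accepted value strings are defined by the stage configuration reference, so treat the following as illustrative:
# Hypothetical Model Config entry; confirm the exact value strings in the stage reference.
modelConfig:
  vectorQuantizationMethod: min-max   # or max-scale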

Custom certificates for indexing and querying services

Fusion 5.9.13 introduces the ability to add custom certificates for indexing and querying services, making it easier to align with your organization’s specific security requirements. A Helm chart update adds support for persistent custom certificates and loads them into the services’ truststores during pod startup. To add a custom certificate, create a new YAML file for your custom certificates and edit it to include your indexing or querying certificates. You must use different YAML files in order to use different certificates for the indexing and querying services. See Deploy Fusion at Scale for full instructions, including the Helm chart update.
Before you begin, see Fusion Server Deployment to understand the architecture and requirements. This article explains how to plan and execute a Fusion deployment at the scale required for staging or production. While the setup_f5_*.sh scripts are handy for getting started and proof-of-concept purposes, this article covers the planning process for building a production-ready environment.
LucidAcademy: Lucidworks offers free training to help you get started. The course Preparing for Fusion Implementation focuses on the key elements for a successful implementation: defining your business requirements, preparing clean data, and involving the right personnel.
Visit the LucidAcademy to see the full training catalog.

Prerequisites

You must meet the following prerequisites before you can customize your Fusion cluster:
  • A local copy of the fusion-cloud-native repository. This must be up-to-date with the latest master branch.
  • Any cloud provider-specific command line tools, such as gcloud or aws, and kubectl.
    See the platform-specific instructions linked above, or check with your cloud provider.
  • Helm v3
    • To install on a Mac:
    brew upgrade kubernetes-helm
    
    • For other operating systems, download from Helm Releases.
    • Verify your installation:
    helm version --short
    v3.0.0+ge29ce2a
    
  • Kubernetes namespace
    • Collect the following information about your Kubernetes environment:
      • CLUSTER: Cluster name (passed to our setup scripts using the -c arg)
      • NAMESPACE: Kubernetes namespace in which to install Fusion. A namespace name can contain only lowercase letters (a-z), digits (0-9), and dashes; periods and underscores are not allowed.
  • (optional) Clarify your organization’s DockerHub policy. The Fusion Helm chart points to public Docker images on DockerHub. Your organization may not allow Kubernetes to pull images directly from DockerHub or may require extra security scanning before loading images into production clusters.
    Consult your Kubernetes and Docker admin team to find how to get the Fusion images loaded into a registry that’s accessible to your cluster. You can update the image for each service using the custom values YAML file.
Kubernetes namespace tips
  • Fusion 5 service discovery requires all services for the same release be deployed in the same namespace. Moreover, you should only run one instance of Fusion in a namespace. If you need multiple instances of Fusion running in the same Kubernetes cluster, then you need to deploy them in separate namespaces.
  • If your organization requires CPU / Memory quotas for namespaces, you can start with a minimum of 12 CPU and 45GB of RAM (such as 3 x n1-standard-4 on GKE), but you will need to increase the quotas once you start load testing Fusion with production workloads and real datasets. A minimal ResourceQuota sketch follows this list.
  • Fusion requires at least 3 ZooKeeper nodes and 2 Solr nodes to achieve high availability.
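If your organization does enforce namespace quotas, a minimal Kubernetes ResourceQuota matching the starting sizes mentioned above might look like the following sketch (the object name and values are illustrative; tune them after load testing):
# Illustrative namespace quota for the starting footprint described in the tips above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: fusion-quota          # example name
  namespace: <NAMESPACE>
spec:
  hard:
    requests.cpu: "12"
    requests.memory: 45Gi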

Custom values YAML file

  1. Clone the fusion-cloud-native repository: git clone https://github.com/lucidworks/fusion-cloud-native
  2. Run the customize_fusion_values.sh script.
    ./customize_fusion_values.sh  --provider <provider> -c <cluster> -n <namespace> \
     --num-solr 3 \
     --solr-disk-gb 100 \
     --node-pool <node_selector> \
     --prometheus true \
     --with-resource-limits \
     --with-affinity-rules
    
    Pass the --help parameter to see script usage details.
    The script creates the following files:
    • <provider>_<cluster>_<namespace>_fusion_values.yaml: Main custom values YAML used to override Helm chart defaults for Fusion microservices.
    • <provider>_<cluster>_<namespace>_monitoring_values.yaml: Custom values YAML used to configure Prometheus and Grafana.
    • <provider>_<cluster>_<namespace>_fusion_resources.yaml: Resource requests and limits for all microservices.
    • <provider>_<cluster>_<namespace>_fusion_affinity.yaml: Pod affinity rules to ensure multiple replicas for a single service are evenly distributed across zones and nodes.
    • <provider>_<cluster>_<namespace>_upgrade_fusion.sh: Script used to install and/or upgrade Fusion using the aforementioned custom values YAML files.
    For an explanation of these placeholder values, see Deployment-specific values below.
  3. Add the new files to version control. You will make changes to them over time as you fine-tune your Fusion installation, and you will need them to perform upgrades. If you try to upgrade your Fusion installation and don’t provide the custom values YAML, your deployment will revert to chart defaults.
    Review the <provider>_<cluster>_<release>_fusion_values.yaml file to familiarize yourself with its structure and contents. Notice it contains a separate section for each of the Fusion microservices. The example configuration of the query-pipeline service below illustrates some important concepts about the custom values YAML file.
    query-pipeline: ①
      enabled: true ②
      nodeSelector: ③
        cloud.google.com/gke-nodepool: default-pool
      javaToolOptions: "..." ④
      pod: ⑤
        annotations:
          prometheus.io/port: "8787"
          prometheus.io/scrape: "true"
          prometheus.io/path: "/actuator/prometheus"
① Service-specific setting overrides under the top-level heading
② Every Fusion service has an implicit enabled flag that defaults to true, set to false to remove this service from your cluster
③ The node selector identifies the label used to find nodes on which to schedule the service’s pods
④ Used to pass JVM options to the service
⑤ Pod annotations to allow Prometheus to scrape metrics from the service
Once you work through all of the configuration topics in this article, you’ll have a well-configured custom values YAML file for your Fusion 5 installation. You’ll then use this file during the Helm v3 installation at the end of this article.

Deployment-specific values

The script creates a custom values YAML file using the naming convention: <provider>_<cluster>_<namespace>_fusion_values.yaml. For example, gke_search_f5_fusion_values.yaml.
  • <provider>: The K8s platform you’re running on, such as gke.
  • <cluster>: The name of your cluster.
  • <namespace>: The K8s namespace where you want to install Fusion.
  • <node_selector>: Specifies a nodeSelector label to find nodes to schedule Fusion pods on.
Providing the correct --node-pool <node_selector> label is very important. Using the wrong value will cause your pods to be stuck in the pending state. If you’re not sure about the correct value for your cluster, pass an empty string ('') to let Kubernetes decide which nodes to schedule Fusion pods on.
Default nodeSelector labels are provider-specific. The fusion-cloud-native scripts use the following defaults for GKE and EKS:
  • GKE: cloud.google.com/gke-nodepool: default-pool
  • EKS: alpha.eksctl.io/nodegroup-name: standard-workers
If you are deploying Fusion 5.9.12, add the following to your values.yaml file to avoid a known issue that prevents the kuberay-operator pod from launching successfully:
kuberay-operator:
  crd:
    create: true

Flags

The script provides flags for additional configuration:
  • --node-pool: Add a Fusion-specific label to your nodes.
  • --with-resource-limits: Configure resource requests/limits.
  • --with-replicas: Configure replica counts.
  • --with-affinity-rules: Configure pod affinity rules for Fusion services.
Use --node-pool to add a Fusion-specific label to your nodes. First, label the nodes:
kubectl label node <NODE_ID> fusion_node_type=<NODE_LABEL>
Then, pass --node-pool 'fusion_node_type: <NODE_LABEL>'.

Configure Solr sizing

When you’re ready to build a production-ready setup for Fusion 5, you need to customize the Fusion Helm chart to ensure Fusion is well-configured for production workloads. You’ll be able to scale the number of nodes for Solr up and down after building the cluster, but you need to establish the initial size of the nodes (memory and CPU) and the size and type of disks you need. See the example config below to learn which parameters to change in the custom values YAML file.
solr:
  resources:                    # Set resource limits for Solr to help K8s pod scheduling;
    limits:                     # these limits are not just for the Solr process in the pod,
      cpu: "7700m"              # so allow ample memory for loading index files into the OS cache (mmap)
      memory: "26Gi"
    requests:
      cpu: "7000m"
      memory: "25Gi"
  logLevel: WARN
  nodeSelector:
    fusion_node_type: search    # Run this Solr StatefulSet in the "search" node pool
  exporter:
    enabled: true               # Enable the Solr metrics exporter (for Prometheus) and
                                # schedule on the default node pool (system partition)
    podAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9983"
      prometheus.io/path: "/metrics"
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
  image:
    tag: 8.4.1
  updateStrategy:
    type: "RollingUpdate"
  javaMem: "-Xmx3g -Dfusion_node_type=system" # Configure memory settings for Solr
  solrGcTune: "-XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+UseStringDeduplication -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=150 -XX:+UseLargePages -XX:+AlwaysPreTouch"
  volumeClaimTemplates:
    storageSize: "100Gi"        # Size of the Solr disk
  replicaCount: 6               # Number of Solr pods to run in this StatefulSet

zookeeper:
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool
  replicaCount: 3               # Number of Zookeepers
  persistence:
    size: 20Gi
  resources: {}
  env:
    ZK_HEAP_SIZE: 1G
    ZOO_AUTOPURGE_PURGEINTERVAL: 1
You can tune GC settings and the number of replicas after the cluster is built, but changing the size of the persistent volumes is more complicated, so try to pick a good size initially.

Configure storage class for Solr pods (optional)

If you wish to run with a storage class other than the default you can create a storage class for your Solr pods before you install. For example, to create regional disks in GCP you can create a file called storageClass.yaml with the following contents:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: solr-gke-storage-regional
provisioner: kubernetes.io/gce-pd
parameters:
 type: pd-standard
 replication-type: regional-pd
 zones: us-west1-b, us-west1-c
Then provision it into your cluster:
kubectl apply -f storageClass.yaml
Next, have Solr use the storage class by adding the following to the custom values YAML:
solr:
  volumeClaimTemplates:
    storageClassName: solr-gke-storage-regional
    storageSize: 250Gi
We’re not advocating that you must use regional disks for Solr storage, as that would be redundant with Solr replication. We’re just using this as an example of how to configure a custom storage class for Solr disks if you see the need. For instance, you could use regional disks without Solr replication for write-heavy collections.

Configure multiple node pools

Lucidworks recommends isolating search workloads from analytics workloads using multiple node pools. The included scripts do not do this for you; it is a manual process. For an example script for GKE, see create_gke_cluster_node_pools.sh. In the custom values YAML file, you can add additional Solr StatefulSets by adding their names to the list under the nodePools property. If any property for a StatefulSet needs to differ from the default set of values, set it directly on the object representing the node pool; any omitted properties default to the base values. See the following example (additional whitespace added for display purposes only):
solr:
  nodePools:
    - name: ""
    - name: "analytics"
      javaMem: "-Xmx6g"
      replicaCount: 6
      storageSize: "100Gi"
      nodeSelector:
        fusion_node_type: analytics ③
      resources:
        requests:
          cpu: 2
          memory: 12Gi
        limits:
          cpu: 3
          memory: 12Gi
    - name: "search"
      javaMem: "-Xms11g -Xmx11g"
      replicaCount: 12
      storageSize: "50Gi"
      nodeSelector:
        fusion_node_type: search ⑤
      resources:
        limits:
          cpu: "7700m"
          memory: "26Gi"
        requests:
          cpu: "7000m"
          memory: "25Gi"
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool ⑥
...
① The empty string "" is the suffix for the default partition.
② Overrides the settings for the analytics Solr pods.
③ Assigns the analytics Solr pods to the node pool and attaches the label fusion_node_type=analytics. You can use the fusion_node_type property in Solr auto-scaling policies to govern replica placement during collection creation.
④ Overrides the settings for the search Solr pods.
⑤ Assigns the search Solr pods to the node pool and attaches the label fusion_node_type=search.
⑥ Sets the default settings for all Solr pods, if not specifically overridden in the nodePools section above.
Do not edit the nodePools value "".
In the example above, the analytics partition replicaCount, or number of Solr pods, is six; the search partition replicaCount is twelve. Each nodePool is automatically assigned a -Dfusion_node_type system property whose value matches the name of the nodePool, for example -Dfusion_node_type=search. The Solr pods expose this fusion_node_type system property.

Solr auto-scaling policy

Use replica placement plugins to control how replicas are placed in Solr.

Pod network policy

A Kubernetes network policy governs how groups of pods communicate with each other and other network endpoints. With Fusion, all incoming traffic flows through the API Gateway service. All Fusion services in the same namespace expect an internal JWT, which is supplied by the Gateway, as part of the request. As a result, Fusion services enforce a basic level of API security and don’t need an additional network policy to protect them from other pods in the cluster. To install the network policy for Fusion services, pass --set global.networkPolicyEnabled=true when installing the Fusion Helm chart.
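If you prefer to keep this setting in your custom values YAML rather than passing --set on the command line, the flag corresponds to the following values entry:
# Equivalent custom values entry for enabling the Fusion network policy.
global:
  networkPolicyEnabled: true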

On-premises private Docker registries

For on-premises Kubernetes deployments, your organization may not allow Kubernetes to pull Fusion’s Docker images from DockerHub. See the instructions below for details on using a private Docker registry with Fusion. These are general instructions that may need to be adapted to work within your organization’s security policies:
  1. Transfer the public images from DockerHub to your private Docker registry.
  2. Establish a workstation that has access to DockerHub. This workstation must connect to your internal Docker registry, most likely via VPN connection. In this example, the workstation is referred to as envoy.
  3. Install Docker on envoy. You need at least 100GB of free disk for Docker.
  4. Pull all of the images from DockerHub to envoy’s local registry. For example, to pull the query pipeline image, run docker pull lucidworks/query-pipeline:5.9.0. See docker pull --help for more information about pulling Docker images.
  5. Establish a connection from envoy to the private Docker registry, most likely via a VPN connection. In this example, the private Docker registry is referred to as <internal-private-registry>.
  6. Push the images from envoy’s Docker registry to the private registry. This will take a long time.
    1. You’ll need to re-tag all images for the internal registry. For example, to tag the query-pipeline image, run:
    docker tag lucidworks/query-pipeline:5.9.0 <internal-private-registry>/query-pipeline:5.9.0
    
    2. Push each image to the internal repo:
    docker push <internal-private-registry>/query-pipeline:5.9.0
    
  7. Install the Docker registry secret in Kubernetes. Create the Docker registry secret in the Kubernetes namespace where you want to install Fusion:
    SECRET_NAME=<internal-private-secret>
    REPO=<internal-private-registry>
    
    kubectl create secret docker-registry "${SECRET_NAME}" \
     --namespace "${NAMESPACE}" \
     --docker-server="${REPO}" \
     --docker-username=${REPO_USER} \
     --docker-password=${REPO_PASS} \
     --docker-email=${REPO_USER}
    
    For details, see the Kubernetes article Pull an Image from a Private Registry.
  8. Update the custom values YAML for your cluster to point to your private registry and secret to allow Kubernetes to pull images. For example:
    query-pipeline:
     image:
       imagePullSecrets:
         - name: <internal-private-secret>
       repository: <internal-private-registry>
    
    Repeat the process for all Fusion services.

Customize Helm Chart

Every Fusion service allows you to override the imagePullSecrets setting using the custom values YAML. However, other third-party services, including ZooKeeper, Pulsar, Prometheus, and Grafana, don’t allow you to supply the pull secret using the custom values YAML. To patch the default service account for your namespace and add the pull secret, run the following:
kubectl patch sa default -n $NAMESPACE \
  -p '"imagePullSecrets": [{"name": "<internal-private-secret>" }]'
In Windows using PowerShell or another CLI, you might have to escape the double quotes with a backslash (\) or reverse the order of single and double quotes:
kubectl patch sa default -n $NAMESPACE \
  -p "'imagePullSecrets': [{'name': '<internal-private-secret>'}]"
Replace <internal-private-secret> with the name of the secret you created in the steps above.
This allows the default service account to pull images from the private registry without specifying the pull secret on the resources directly.

Add additional trusted certificate(s) to Fusion’s indexing and querying services (optional)

You can add custom trusted certificates to support Fusion’s indexing and querying services. You may want to use custom trusted certificates if, for example, you have specific security requirements for data handling or need to support an existing infrastructure and its security needs. This method involves updating your Helm chart. If you want to add custom trusted certificates for both the indexing and querying services, follow these instructions twice: once for the indexing service, and once for the querying service. To add different certificates to the indexing and querying services, create one YAML file with the indexing service certificates and one YAML file for the querying service certificates before following these instructions.
You may use the same YAML file if you want to use the same certificates for both services.
To add custom trusted certificates:
  1. Create a new YAML file for your custom trusted certificates. The customcerts.yaml file is the example file in these instructions.
  2. Add the custom certificate(s) in the YAML file created in the previous step. For example:
    trustedCertificates:
     enabled: true
     files:
       some.cert: |-
         -----BEGIN CERTIFICATE-----
         MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
         (...)
         EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
         -----END CERTIFICATE-----
       other.cert: |-
         -----BEGIN CERTIFICATE-----
         MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
         (...)
         EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
          -----END CERTIFICATE-----
    
  3. Update the indexing or querying service by running the following Helm command. Replace EXAMPLE-VALUES-FILE.yaml with your previous values file.
    helm upgrade --install --namespace ${EXAMPLE-NAMESPACE} ${HELM-RELEASE} ${HELM-CHART-PATH} --values EXAMPLE-VALUES-FILE.yaml --values customcerts.yaml
    
  4. Verify the indexing or querying pod has a new init-container with the name import-certs.

Add additional trusted certificate(s) for connectors to allow crawling of web resources with SSL/TLS enabled (optional)

To crawl a datasource that uses a self-signed certificate, add the required certificates to the connector services. For example:
classic-rest-service:
  trustedCertificates:
    enabled: true
    files:
      some.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
      other.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
connector-plugin:
  trustedCertificates:
    enabled: true
    files:
      some.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
      other.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----

Generating the certificate on the Linux command line

Use the following command to generate a .crt file in $fusion_home/apps/jetty/connectors/etc/yourcertname.crt:
openssl s_client -servername remote.server.net -connect remote.server.net:443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >$fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using Firefox web browser

  1. Navigate to the SharePoint host.
  2. Click the padlock icon in the address bar, then click the arrow to expand the connection details.
  3. Next, navigate to More Information > View Certificate > Export.
    Save the file to the following folder: $fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using Chrome web browser

  1. Navigate to Chrome menu > More Tools > Developer Tools > Security Tab. This will display the Security overview.
  2. Click the View certificate button.
  3. Save the file to the following folder:
$fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using PowerShell

Use the following script to generate a .crt file in $fusion_home\apps\jetty\connectors\etc\yourcertname.crt:
$fusion_home = "c:\your\fusion\install\directory"
$webRequest = [Net.WebRequest]::Create("https://your-hostname")
try { $webRequest.GetResponse() } catch {}
$cert = $webRequest.ServicePoint.Certificate
$bytes = $cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Cert)
set-content -value $bytes -encoding byte -path "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt"
certutil -encode "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt" "$fusion_home\apps\jetty\connectors\etc\yourcertname.crt"
rm "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt" -f

Install Fusion 5 on Kubernetes

At this point, you’re ready to install Fusion 5 using the custom values YAML files and upgrade script. If you used the customize_fusion_values.sh script, run the upgrade script it generated using BASH:
./gke_search_f5_upgrade_fusion.sh
Once the installation is complete, verify your Fusion installation is running correctly.

Monitoring Fusion with Prometheus and Grafana

Lucidworks recommends using Prometheus and Grafana for monitoring the performance and health of your Fusion cluster. Your operations team may already have these services installed. If not, install them into the Fusion namespace.
The Custom values YAML file shown above activates the Solr metrics exporter service and adds pod annotations so Prometheus can scrape metrics from Fusion services.
  1. Run the customize_fusion_values.sh script with the --prometheus true option. This creates an extra custom values YAML file for installing Prometheus and Grafana, <provider>_<cluster>_<namespace>_monitoring_values.yaml. For example: gke_search_f5_monitoring_values.yaml.
  2. Commit the YAML file to version control.
  3. Review its contents to ensure that the settings suit your needs. For example, decide how long you want to keep metrics. The default is 36 hours.
    See the Prometheus documentation and Grafana documentation for details.
  4. Run the install_prom.sh script to install Prometheus & Grafana in your namespace. Include the provider, cluster name, namespace, and helm release as in the example below:
    ./install_prom.sh --provider gke -c search -n f5 -r 5-5-1
    
    Pass the --help parameter to see script usage details.
    The Grafana dashboards from monitoring/grafana are installed automatically by the install_prom.sh script.

Support for pre-filtering in the Chunking Neural Hybrid Query stage

For parity with the Neural Hybrid Query stage, the Chunking Neural Hybrid Query Stage now supports pre-filtering. Pre-filtering can improve performance by reducing the number of chunks that need to be processed. However, in some cases it can also lead to less accurate facet counts and search results. Pre-filtering is blocked by default. You can enable it by unchecking the Block pre-filtering checkbox in the Chunking Neural Hybrid Query stage configuration.

Bug fixes

  • Corrected the health reporting behavior for the job-config service after ZooKeeper disruptions.
    Fusion now ensures the /actuator/health endpoint correctly reflects the actual status of the job-config service, even after temporary ZooKeeper unavailability.
    This prevents false DOWN reports that could affect monitoring or automated recovery systems.
  • Updated the fusion-spark-3.2.2 image to resolve a Fabric8 token refresh bug.
    The Fabric8 Kubernetes client in this Spark image has been patched to fix a bug that prevented token refresh under OIDC authentication.
    This ensures that Spark jobs using fusion-spark-3.2.2 run reliably in Kubernetes environments that require token-based authentication.
  • Fixed a bug that prevented Web V2 connector jobs from restarting after failure.
    In previous versions, if a job was interrupted (such as by scaling down the connector pod), the connectors-backend service could enter a corrupted state, preventing future runs of the same job with errors like The state should never be null.
    Fusion now properly resets internal job state, ensuring that failed jobs can be restarted reliably.
  • Fixed Web connector indexing failure caused by corrupted job state.
    Fusion 5.9.13 restores indexing functionality for the Webv2 connector (v2.0.1) by resolving an issue that caused a corrupted job state in the connectors-backend service.
    Jobs that previously failed with The state should never be null can now complete successfully.
  • Fixed an issue that prevented schedule changes from persisting for some datasources.
    In Fusion 5.9.12, clicking Save after configuring a new schedule for a datasource in the “Run” dialog could fail silently in certain apps, leaving the schedule unsaved with no warning to the user.
    This was due to a job-config handling issue that affected pre-existing app configurations.
    Fusion 5.9.13 resolves this issue so that new schedules are reliably saved and acknowledged as expected.
  • Fixed permission handling in the job-config service to ensure scheduled jobs run as expected.
    Fusion now correctly handles permission checks when creating or modifying scheduled jobs, preventing failures caused by mismatches between user and service account permissions.
    This resolves issues where jobs could not be scheduled or executed following upgrades.
  • Helm charts now support Kubernetes secrets for TLS keystore passwords.
    Fusion 5.9.13 updates the Helm charts to eliminate the use of plaintext passwords for TLS keystores. You can now securely manage the keystorePassword using a Kubernetes secret, aligning with hardened OpenShift and enterprise security policies.
  • Upgraded the Spring framework in the web-apps service to improve security and ensure compatibility with token authentication behavior on modern Kubernetes platforms.

Known issues

  • Job-config service may appear “down” in UI even when running correctly
    In Fusion 5.9.13, the job-config service may falsely report as “down” in the Fusion UI, particularly during startup or in TLS-enabled deployments.
    This issue is fixed in Fusion 5.9.14.
  • Jobs and V2 datasources may fail when Fusion collections are remapped to different Solr collections.
    In Fusion 5.9.13, strict validation in the job-config service causes “Collection not found” errors when jobs or V2 datasources target Fusion collections that point to differently named Solr collections.
    This issue is fixed in Fusion 5.9.14. As a workaround, use V1 datasources or avoid using REST call jobs on remapped collections.
  • Saving large query pipelines may cause OOM failures under high load.
    In Fusion 5.9.13, saving a large query pipeline during high query volume can result in thread lock, excessive memory use, and eventual OOM errors in the Query service.
    This issue is fixed in Fusion 5.9.14.
  • Some S3 and Web datasource jobs cannot be stopped in Fusion 5.9.13.
    Clicking the Stop button has no effect in some cases where the backend job is no longer being tracked. This causes the job-config service to ignore the job and prevents the system from updating the job status. A workaround is to issue a POST {"action": "start"} to the appropriate job actions endpoint, which aborts the stuck job.
    This issue is fixed in Fusion 5.9.14.
  • Spark jobs may disappear from the job list after pod deletion
    In Fusion 5.9.12 and 5.9.13, Spark jobs may vanish from the job list if a Spark driver pod is deleted. This behavior can cause confusion and require a job-launcher restart to restore job visibility.
    This issue is fixed in Fusion 5.9.14.

Deprecations and removals

For full details, see Deprecations and Removals.

Bitnami removal

Fusion 5.9.13 will be re-released with the same functionality but updated image references. In the meantime, Lucidworks will self-host the required images while we work to replace Bitnami images with internally built open-source alternatives. If you are a self-hosted Fusion customer, you must upgrade before August 28 to ensure continued access to container images and prevent deployment issues. You can reinstall your current version of Fusion or upgrade to Fusion 5.9.14, which includes the updated Helm chart and prepares your environment for long-term compatibility. See Prevent image pull failures due to Bitnami deprecation in Fusion 5.9.5 to 5.9.13 for more information on how to prevent image pull failures.

Hybrid Query pipeline stage

The Hybrid Query pipeline stage is now deprecated. Instead, use the Neural Hybrid Query stage, which combines lexical and vector search and includes improvements such as K-Nearest Neighbors (KNN), chunking, and more.

Container image for kafka-exporter

Bitnami stopped supporting kafka-exporter and renamed the docker repository from bitnami/kafka-exporter to bitnami/kafka-exporter-archived. If you were using kafka-exporter, then you need to update the repository name in your values.yaml file:
kafka.metrics.kafka.image.repository=bitnami/kafka-exporter-archived
Changing the repository name will allow you to continue using Kafka metrics evaluation with Prometheus/Grafana.
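Expressed as a custom values YAML entry, the same property path maps to the following structure:
# values.yaml equivalent of the kafka.metrics.kafka.image.repository property shown above.
kafka:
  metrics:
    kafka:
      image:
        repository: bitnami/kafka-exporter-archived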

Removed deprecated X-XSS-Protection header from session API responses

Fusion 5.9.13 removes the deprecated X-XSS-Protection HTTP response header from the session API. This header is no longer supported by modern browsers and has no effect on security behavior. Its removal helps avoid confusion during security audits and aligns with current web security standards.

Platform support and component versions

Kubernetes platform support

Lucidworks has tested and validated support for the following Kubernetes platforms and versions:
  • Google Kubernetes Engine (GKE): 1.29, 1.30, 1.31, 1.32
  • Microsoft Azure Kubernetes Service (AKS): 1.29, 1.30, 1.31, 1.32
  • Amazon Elastic Kubernetes Service (EKS): 1.29, 1.30, 1.31, 1.32
Support is also offered for Rancher Kubernetes Engine (RKE and RKE2) and OpenShift 4 versions that are based on Kubernetes 1.29, 1.30, 1.31, 1.32; note that RKE2 may require some Helm chart modification. OpenStack and customized Kubernetes installations are not supported. For more information on Kubernetes version support, see the Kubernetes support policy.

Component versions

The following table details the versions of key components that may be critical to deployments and upgrades.
  • Solr: fusion-solr 5.9.13 (based on Solr 9.6.1)
  • ZooKeeper: 3.9.1
  • Spark: 3.4.1
  • Ingress Controllers: Nginx, Ambassador (Envoy), GKE Ingress Controller
  • Ray: ray[serve] 2.42.1
More information about support dates can be found at Lucidworks Fusion Product Lifecycle.