    Fusion 5 Upgrades

    This guide describes how to perform Fusion 5 upgrades.

    Before upgrading, review the Deprecations and Removals between your current version and your target version so you are aware of any changes that affect your deployment.

    Lucidworks recommends upgrading to the next minor version only. For example, you should upgrade from Fusion 5.6.1 to Fusion 5.7.1 before upgrading to Fusion 5.8.0.

    The general upgrade process is described in this article. However, the specific procedures vary depending on your upgrade path. For the most accurate instructions, refer to the upgrade article for your specific upgrade path.

    General upgrade process

    Fusion natively supports deployments on various Kubernetes platforms, including AKS, EKS, and GKE. You can also deploy on a different Kubernetes platform of your choice.

    Fusion includes an upgrade script for AKS, EKS, and GKE. This script is not generated for other Kubernetes deployments; for those, see Other Kubernetes deployment upgrades.

    Upgrades differ from platform to platform. See below for more information about upgrading on your platform of choice.

    Whenever you upgrade Fusion, you must also update your remote connectors, if you are running any. Configure Remote V2 Connectors provides complete instructions for remote connector setup. You can download the latest files at V2 Connectors Downloads.

    Natively supported deployment upgrades

    Deployment type                          <platform> value
    ---------------------------------------  ----------------
    Azure Kubernetes Service (AKS)           aks
    Amazon Elastic Kubernetes Service (EKS)  eks
    Google Kubernetes Engine (GKE)           gke

    Fusion includes upgrade scripts for natively supported deployment types. To upgrade:

    1. Open the <platform>_<cluster>_<release>_upgrade_fusion.sh upgrade script file for editing.

    2. Update the CHART_VERSION to your target Fusion version, and save your changes.

    3. Run the <platform>_<cluster>_<release>_upgrade_fusion.sh script. The <release> value is the same as your namespace, unless you overrode the default value using the -r option.
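
    As a concrete sketch, assume a GKE cluster named search, the default release and namespace f5, and a target version of 5.9.5 (the names and version here are illustrative, not prescriptive):

    # Steps 1-2: edit the script and set CHART_VERSION to the target version, e.g. 5.9.5
    # Step 3: run the upgrade script (named <platform>_<cluster>_<release>_upgrade_fusion.sh)
    ./gke_search_f5_upgrade_fusion.sh

    # Watch the rollout progress
    kubectl get pods --namespace f5 --watch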

    After running the upgrade, use kubectl get pods to see the changes applied to your cluster. It may take several minutes to perform the upgrade, as new Docker images are pulled from DockerHub. To see the versions of running pods, do:

    kubectl get po -o jsonpath='{..image}'  | tr -s '[[:space:]]' '\n' | sort | uniq

    Other Kubernetes deployment upgrades

    To update an existing installation, do:

    RELEASE=f5
    NAMESPACE=default
    MY_VALUES=<path to your custom values YAML file>
    helm repo update
    helm upgrade ${RELEASE} "lucidworks/fusion" --namespace "${NAMESPACE}" --values "${MY_VALUES}"

    Except for ZooKeeper, all K8s deployments and statefulsets use a RollingUpdate update policy:

      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
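
    To confirm which update strategy each workload is using, a quick read-only check like the following works (it assumes the default namespace used in the example above):

    # List the update strategy for each Deployment and StatefulSet
    kubectl get deployments -n default -o custom-columns=NAME:.metadata.name,STRATEGY:.spec.strategy.type
    kubectl get statefulsets -n default -o custom-columns=NAME:.metadata.name,STRATEGY:.spec.updateStrategy.type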

    ZooKeeper instances use OnDelete to avoid changing critical stateful pods in the Fusion deployment. To apply changes to ZooKeeper after performing the upgrade (uncommon), you need to delete the pods manually. For example:

    kubectl delete pod f5-zookeeper-0
    Delete one pod at a time, and verify that the new pod is healthy and serving traffic before deleting the next one.
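
    A minimal sketch of that pod-at-a-time process, assuming a three-node ZooKeeper ensemble and the f5 release name used above:

    # Delete and wait for each ZooKeeper pod in turn
    for i in 0 1 2; do
      kubectl delete pod f5-zookeeper-$i
      sleep 10   # give the StatefulSet controller time to recreate the pod
      kubectl wait --for=condition=Ready pod/f5-zookeeper-$i --timeout=5m
    done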

    You can also set the updateStrategy under the zookeeper section in your "${MY_VALUES}" file:

    solr:
      ...
      zookeeper:
        updateStrategy:
          type: "RollingUpdate"

    Upgrades with Helm v3

    One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. For example, Fusion 5 allows customers to upgrade from Fusion 5.1.0 to a later 5.x.y version on a live cluster with zero downtime or disruption of service.

    When Kubernetes performs a rolling update to an individual microservice, there is a mix of old and new services in the cluster. Requests from other services route to both versions.

    Lucidworks ensures all changes we make to our service do not break the API interface exposed to other services in the same minor release version (5.x). We also ensure that the stored configuration remains compatible in the same minor release version.

    Lucidworks releases minor updates to individual services frequently. Pull in those upgrades using Helm at your discretion.
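
    To see which chart versions are available before pulling in an update, a quick check against the Helm repository looks like this (it assumes the Lucidworks chart repository is already added under the name lucidworks):

    # Refresh the local chart index and list recent Fusion chart versions
    helm repo update
    helm search repo lucidworks/fusion --versions | head -n 10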

    How to upgrade Fusion
    1. Clone the fusion-cloud-native repo, if you haven’t already.

    2. Locate the setup_f5_<platform>.sh script that matches your Kubernetes platform.

    3. Run the script with the --upgrade option.

      To see what would be upgraded, pass the --dry-run option to the script.
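
    For example, on GKE the sequence might look like the following sketch; reuse the same cluster, project, and release options you passed when you originally installed Fusion:

    git clone https://github.com/lucidworks/fusion-cloud-native.git
    cd fusion-cloud-native

    # Preview what would change, then apply the upgrade
    ./setup_f5_gke.sh --upgrade --dry-run
    ./setup_f5_gke.sh --upgrade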

    The scripts in the fusion-cloud-native repo automatically pull in the latest chart updates from our Helm repository and deploy any needed updates by comparing your current installation with the latest release from Lucidworks.

    Helm upgrade script

    Once you deploy a working cluster, use the upgrade script created by the customize_fusion_values.sh script. The upgrade script hard-codes the parameters, so you don't need to remember which parameters to pass. This is helpful when working with multiple K8s clusters. Make sure you check the script into version control alongside your custom values YAML files.

    Whenever you change the custom values YAML files for your cluster, you need to run the upgrade script to apply the changes. The script calls helm upgrade with the correct parameters and --values options.

    If you run helm upgrade without passing the custom values YAML files, the deployment will revert to using chart defaults, which you never want to do.
    The script assumes your kubeconfig is pointing to the correct cluster and that you're using Helm v3; if not, the upgrade fails. Select the correct kubeconfig before running the script.
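
    A quick way to confirm both assumptions before running the script:

    # Show the kubeconfig context currently in use
    kubectl config current-context

    # Confirm the Helm client is v3
    helm version --short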