The figure below provides a general overview of the Fusion architecture and shows how Fusion services and data flow within a deployment.
[Figure: Fusion services diagram]
The image above is for illustrative purposes only. It depicts services available in Fusion 5.9, but your actual implementation may differ.

Architecture and requirements

LucidAcademy: Lucidworks offers free training to help you get started. The Learning Path for Implementing Fusion focuses on how to be successful when starting your implementation journey and avoid delays:
Implementing Fusion
Visit the LucidAcademy to see the full training catalog.

Deploy Fusion on Kubernetes

  • Deploy Fusion 5 on Google Kubernetes Engine (GKE)
  • Deploy Fusion 5 on Amazon Elastic Kubernetes Service (EKS)
  • Deploy Fusion 5 on Azure Kubernetes Service (AKS)
  • Deploy Fusion 5 on Other Kubernetes Platforms
  • Deploy Fusion at Scale
Fusion supports deployment on Google Kubernetes Engine (GKE). This topic explains how to deploy a Fusion cluster on GKE using the setup_f5_gke.sh script in the fusion-cloud-native repository.

Prerequisites

This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

Release Name and Namespace

Before installing Fusion, you need to choose a Kubernetes namespace (https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.
All Fusion services must run in the same namespace; do not split a Fusion cluster across multiple namespaces.
Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.

Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Regardless of which Kubernetes platform you’re using, Helm is required to install Fusion. On macOS, you can do:
brew install kubernetes-helm
If you already have helm installed, make sure you’re using the latest version:
brew upgrade kubernetes-helm
For other operating systems, refer to the Helm installation docs: https://helm.sh/docs/using_helm/. The Fusion Helm chart requires Helm greater than version 3.0.0; check your Helm version by running helm version --short.
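For example, on Linux you can install Helm 3 with the official installer script and then confirm the version; this is a minimal sketch based on Helm’s own documented install method, not something specific to Fusion:
# Install Helm 3 using the official installer script (Linux/macOS)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Confirm the client is version 3.0.0 or later
helm version --short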

Helm User Permissions

If Fusion must be installed by a user with minimal permissions rather than an admin user, the role and cluster role that must be assigned to that user within the target namespace are documented in the install-roles directory.
When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl
To use these roles in a cluster, first create the namespace that you wish to install Fusion into as an admin user:
k create namespace fusion-namespace
Apply the role.yaml and cluster-role.yaml files to that namespace:
k apply -f cluster-role.yaml
k config set-context --current --namespace=$NAMESPACE
k apply -f role.yaml
Then create the rolebinding and clusterrolebinding for the install user:
k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>
You will then be able to run the helm install command as the <install_user>.
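As a rough sketch of what that installation step can look like when driven by Helm directly (the chart repository URL and chart name here are assumptions based on the public Lucidworks Helm charts; the setup scripts in this repo normally run the equivalent command for you):
# Assumed chart repository and chart name; verify against the fusion-cloud-native setup scripts before using
helm repo add lucidworks https://charts.lucidworks.com
helm repo update
helm install f5 lucidworks/fusion --namespace fusion-namespace --values my_fusion_values.yaml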

Clone fusion-cloud-native from GitHub

You should clone this repo from GitHub as you’ll need to run the scripts on your local workstation:
git clone https://github.com/lucidworks/fusion-cloud-native.git
You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.
cd fusion-cloud-native
git pull
Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project instead: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.
The setup_f5_gke.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_gke.sh) provided in this repo is strictly optional. The script is mainly intended to help those new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and GKE, you can skip the script and just use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.

Set up the Google Cloud SDK (one time only)

If you’ve already installed the gcloud command-line tools, you can skip to <<cluster-create,Create a Fusion cluster in GKE>>.
These steps set up your local Google Cloud SDK environment so that you’re ready to use the command-line tools to manage your Fusion deployment. Usually, you only need to perform these setup steps once. After that, you’re ready to create a cluster.
For a getting-started tutorial for GKE, see https://cloud.google.com/kubernetes-engine/docs/deploy-app-cluster.
To set up the Google Cloud SDK:
  1. Enable the Kubernetes Engine API: https://console.cloud.google.com/apis/library/container.googleapis.com?q=kubernetes%20engine
  2. Log in to Google Cloud: gcloud auth login
  3. Set up the Google Cloud SDK:
    1. gcloud config set compute/zone <zone-name>. If you are working with regional clusters instead of zonal clusters, use gcloud config set compute/region <region-name> instead.
    2. gcloud config set core/account <email address>
    3. New GKE projects only: gcloud projects create <new-project-name>. If you have already created a project, for example in https://console.cloud.google.com/, skip to the next step.
    4. gcloud config set project <project-name>
Make sure you install the Kubernetes command-line tool kubectl using:
gcloud components install kubectl
gcloud components update
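To sanity-check the SDK setup before creating a cluster, you can run the following standard commands (a minimal sketch; they only print your current configuration):
# Show the active account, project, and compute zone/region
gcloud config list
# Confirm kubectl is installed and on your PATH
kubectl version --client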

Create a single-node demo cluster

Run the setup_f5_gke.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_gke.sh) to install Fusion 5.x in a GKE cluster. To create a new, single-node demo cluster and install Fusion, simply do:
./setup_f5_gke.sh -c <cluster_name> -p <gcp_project_id> --create demo
Use the --help option to see script usage. If you want the script to create a cluster for you, pass the --create option with either demo or multi_az. If you don’t want the script to create a cluster, create a cluster before running the script and simply pass the name of the existing cluster using the -c parameter.
If you pass --create demo to the script, it creates a single-node GKE cluster (defaults to the n1-standard-8 node type). The minimum node type you’ll need for a one-node cluster is an n1-standard-8 (on GKE), which has 8 CPUs and 30 GB of memory. This is cutting it very close in terms of resources, as you also need to host all of the Kubernetes system pods on this same node. This works for kicking the tires on Fusion but is not sufficient for production workloads.
You can change the instance type using the -i parameter; see https://cloud.google.com/compute/docs/regions-zones/#available for a list of which machine types are available in your desired region.
If not provided, the script generates a custom values file named gke_<cluster>_<namespace>_fusion_values.yaml, which you can use to customize the Fusion chart.
WARNING: If using Helm V2, the setup_f5_gke.sh script installs Helm’s Tiller component into your GKE cluster with the cluster admin role. If you don’t want this, please upgrade to Helm v3.
If you see an error similar to the following, wait a few seconds and try running the setup_f5_gke.sh script again with the same arguments, as this is usually a transient issue:
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
After running the setup_f5_gke.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below. When you’re ready to deploy Fusion to a production-like environment, see the Fusion 5 Survival Guide.
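If you created the cluster separately, or are working from another workstation, you can point kubectl at the cluster with the standard gcloud command (a minimal sketch; substitute your own cluster name, zone, and project):
# Fetch credentials for the GKE cluster and set the kubectl context
gcloud container clusters get-credentials <cluster_name> --zone <zone-name> --project <gcp_project_id>
# Confirm kubectl is pointing at the expected cluster
kubectl config current-context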

Create a three-node regional cluster to withstand a zone outage

With a three-node regional cluster, nodes are deployed across three separate availability zones.
./setup_f5_gke.sh -c <cluster> -p <project> -n <namespace> --region <region-name> --create multi_az
  • <cluster> value should be the name of a non-existent cluster; the script will create the new cluster.
  • <project> must match the name of an existing project in GKE. Run gcloud config get-value project to get this value, or see the GKE setup instructions.
  • <namespace> is the Kubernetes namespace to install Fusion into; defaults to default with the release name f5.
  • <region-name> value should be the name of a GKE region, defaults to us-west1. Run gcloud config get-value compute/region to get this value, or see the GKE setup instructions to set the value.
In this configuration, Kubernetes deploys a ZooKeeper and Solr pod on each of the three nodes, which allows the cluster to retain ZK quorum and remain operational after losing one node, such as during an outage in one availability zone. When running in a multi-zone cluster, each Solr node has the solr_zone system property set to the zone it is running in, such as -Dsolr_zone=us-west1-a.
After running the setup_f5_gke.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below. When you’re ready to deploy Fusion to a production-like environment, see the Fusion 5 Survival Guide.
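To confirm that the nodes really span three zones, you can list them with the standard Kubernetes zone label (a minimal sketch; older clusters may expose the zone under failure-domain.beta.kubernetes.io/zone instead):
# List nodes along with the availability zone each one runs in
kubectl get nodes -L topology.kubernetes.io/zone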

GKE Ingress and TLS

The Fusion proxy service provides authentication and serves as an API gateway for accessing all other Fusion services. It’s typical to use an Ingress for TLS termination in front of the proxy service.
The setup_f5_gke.sh script supports creating an Ingress with a TLS cert for a domain you own by passing -t -h <hostname>. After the script runs, you need to create an A record in GCP’s DNS service to map your domain name to the Ingress IP. Once this occurs, our setup uses Let’s Encrypt (https://letsencrypt.org/) to issue a TLS cert for your Ingress.
To see the status of the Let’s Encrypt issued certificate, do:
kubectl get managedcertificates -n <namespace> -o yaml
Please refer to the Kubernetes documentation on configuring an Ingress for GKE: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
The GCP Ingress defaults to a 30-second timeout, which can lead to false negatives for long-running requests such as importing apps. To configure the timeout for the backend in Kubernetes:
Create a BackendConfig object in your namespace:
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-name
spec:
  timeoutSec: 120
  connectionDraining:
    drainingTimeoutSec: 60
Then make sure that the following entries are in the right place in your values.yaml file:
api-gateway:
  service:
    annotations:
      beta.cloud.google.com/backend-config: '{"ports": {"6764":"backend-config-name"}}'
Then upgrade your release to apply the configuration changes.
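Applying the change typically means re-running the Helm upgrade with your custom values file, for example (a minimal sketch; the release name, chart reference, and values file name are assumptions — the setup script’s --upgrade option performs the equivalent step for you):
# Re-apply the chart with the updated custom values file (names are placeholders)
helm upgrade f5 lucidworks/fusion --namespace <namespace> --values gke_<cluster>_<namespace>_fusion_values.yaml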

Ingresses and externalTrafficPolicy

When running a Fusion cluster behind an externally controlled LoadBalancer, it can be advantageous to set the externalTrafficPolicy of the proxy service to Local. This preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but it risks potentially imbalanced traffic spreading. However, in a cluster with a dedicated node pool for Spark jobs that can scale up and down freely, it can prevent unwanted request failures. This behaviour can be altered with the api-gateway.service.externalTrafficPolicy value, which is set to Local if the example values file is used. You must use externalTrafficPolicy=Local for the Trusted HTTP Realm to work correctly. If you are already using a custom values.yaml file, create an entry for externalTrafficPolicy under api-gateway.service:
api-gateway:
  service:
    externalTrafficPolicy: Local
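To check what the proxy service is currently using, you can query the service spec (a minimal sketch; the service name assumes the default f5 release and may differ in your deployment):
# Print the current externalTrafficPolicy of the API gateway service
kubectl get svc f5-api-gateway -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'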

Considerations when using the nginx ingress controller

If you are using the nginx ingress controller to fulfill your ingress definitions, there are a couple of options that we recommend setting in its ConfigMap:
enable-underscores-in-headers: "true"   # Fusion can return some headers that have underscores, these have to be explicitly enabled in nginx
proxy-body-size: "0"        # By default nginx places a maximum size on request bodies, either increase as needed or disable by setting to 0
proxy-read-timeout: "300"   # Increases the timeout for potential slow queries.
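One way to apply these keys is to patch the controller’s ConfigMap directly (a minimal sketch; the ingress-nginx namespace and ingress-nginx-controller ConfigMap name assume a default ingress-nginx installation and may differ in your cluster):
# Merge the recommended settings into the nginx ingress controller ConfigMap
kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge \
  -p '{"data":{"enable-underscores-in-headers":"true","proxy-body-size":"0","proxy-read-timeout":"300"}}'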

Custom values

The example-values folder contains some example values files that can be used as a starting point for resources, affinity, and replica count configuration. These can be passed to the install script using the --values option, for example:
./setup_f5_gke.sh -c <cluster> -p <project> -r <release> -n <namespace> \
  --values example-values/affinity.yaml --values example-values/resources.yaml --values example-values/replicas.yaml
The --values option can be passed multiple times. If the same configuration property is contained within multiple values files, the value from the last file passed with --values wins.
Connectors custom values
If you are using Fusion 5.9 or later, you can specify resources and replica count per connector. This allows you to set different resource limits for each connector. If you do not set custom values for a connector, that connector uses the default values. Set each connector’s resource values in the connector-plugin section under pluginValues. The pluginValues section is a list of plugins and their resources. The following sample shows an example:
pluginValues:
  - id: "plugin-id" <1>
    resources: <2>
      limits:
        cpu: "2"
        memory: "3Gi"
      requests:
        cpu: "250m"
        memory: "2Gi"
    replicaCount: 1 <3>
<1> The plugin ID. It must match the plugin ID on the plugin ZIP file, without the lucidworks. prefix. For example, if the plugin ID on the plugin ZIP file is lucidworks.sharepoint-optimized, the plugin ID is sharepoint-optimized.
<2> The resources settings. You may specify the limits, the requests, and the CPU and memory for each.
<3> The number of replicas per connector. This value is 1 by default.
After editing the connector-plugin section, you must reinstall the affected connector.

Upgrades and Ingress

If you used the -t -h <hostname> options when installing your cluster, our script created an additional values YAML file named tls-values.yaml.
To make things easier when upgrading, you should add the settings from this file into your main custom values YAML file, e.g.:
api-gateway:
  service:
    type: "NodePort"
  ingress:
    enabled: true
    host: "<hostname>"
    tls:
      enabled: true
    annotations:
      "networking.gke.io/managed-certificates": "<RELEASE>-managed-certificate"
      "kubernetes.io/ingress.class": "gce"
This way you don’t have to remember to pass the additional tls-values.yaml file when upgrading.

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation.
Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
Let’s review some useful kubectl commands.

Enhance the K8s Command-line Experience

Here is a list of tools we found useful for improving your command-line experience with Kubernetes:

Useful kubectl commands

kubectl reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Set the namespace for kubectl if not using the default:
kubectl config set-context --current --namespace=<NAMESPACE>
This saves you from having to pass -n with every command.
  • Get a list of running pods: k get pods
  • Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
  • Get pod deployment spec and details: k get pods <pod_id> -o yaml
  • Get details about a pod's events: k describe po <pod_id>
  • Port forward to a specific pod: k port-forward <pod_id> 8983:8983
  • SSH into a pod: k exec -it <pod_id> -- /bin/bash
  • CPU/Memory usage report for pods: k top pods
  • Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
  • Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
  • Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:
kubectl get pods
If all goes well, you should see a list of pods similar to:
NAME                                                        READY   STATUS    RESTARTS   AGE
seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
f5-classic-rest-service-0                                   1/1     Running   3          8m44s
f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
f5-logstash-0                                               1/1     Running   0          8m45s
f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
f5-solr-0                                                   1/1     Running   0          8m44s
f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
f5-zookeeper-0                                              1/1     Running   0          8m45s
The number of pods per deployment / statefulset will vary based on your cluster size and replicaCount settings in your custom values YAML file. Also, don’t worry if you see some pods having been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
To see a list of Fusion services, do:
kubectl get svc
For an overview of the various Fusion 5 microservices, see Fusion microservices. Once you’re ready to build a Fusion cluster for production, see the Fusion 5 Survival Guide.

Upgrading with Zero Downtime

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service. When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases) and requests from other services will be routed to both versions. Consequently, Lucidworks ensures all changes we make to our service do not break the API interface exposed to other services in the same 5.x line of releases. We also ensure stored configuration remains compatible in the same 5.x release line. Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion. To upgrade your cluster at any time, use the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.

Grafana Dashboards

Get the initial Grafana password from a K8s secret by doing:
kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do (replace ${RELEASE} with your Helm release name):
kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000
You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a stronger password. The dashboards and datasource are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.
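Alternatively, a temporary port-forward avoids exposing Grafana on an external IP (a minimal sketch; the deployment name assumes the monitoring release naming used above, and Grafana listens on its default port 3000):
# Forward local port 3000 to the Grafana container, then browse to http://localhost:3000
kubectl port-forward deployment/${RELEASE}-monitoring-grafana 3000:3000 --namespace "${NAMESPACE}"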

More deployment options

  • How to deploy Fusion 5 on Amazon Elastic Kubernetes Service
  • How to deploy Fusion 5 on Azure Kubernetes Service
  • How to deploy Fusion 5 on other Kubernetes platforms

Additional resources

LucidAcademy: Lucidworks offers free training to help you get started. The Course for Deploying Fusion 5 focuses on the prerequisite software needed to deploy Fusion, the necessary setup steps, and the physical act of deployment:
Deploying Fusion 5
Visit the LucidAcademy to see the full training catalog.
Fusion supports deployment on Amazon Elastic Kubernetes Service (EKS). This topic explains how to deploy a Fusion cluster on EKS using the setup_f5_eks.sh script in the fusion-cloud-native repository. In addition, this topic provides information about how to configure IAM roles for the service account.

Prerequisites

This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

Release Name and Namespace

Before installing Fusion, you need to choose a Kubernetes namespace (https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.
All Fusion services must run in the same namespace; do not split a Fusion cluster across multiple namespaces.
Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.

Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Regardless of which Kubernetes platform you’re using, Helm is required to install Fusion. On macOS, you can do:
brew install kubernetes-helm
If you already have helm installed, make sure you’re using the latest version:
brew upgrade kubernetes-helm
For other operating systems, refer to the Helm installation docs: https://helm.sh/docs/using_helm/. The Fusion Helm chart requires Helm greater than version 3.0.0; check your Helm version by running helm version --short.

Helm User Permissions

If Fusion must be installed by a user with minimal permissions rather than an admin user, the role and cluster role that must be assigned to that user within the target namespace are documented in the install-roles directory.
When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl
To use these roles in a cluster, first create the namespace that you wish to install Fusion into as an admin user:
k create namespace fusion-namespace
Apply the role.yaml and cluster-role.yaml files to that namespace:
k apply -f cluster-role.yaml
k config set-context --current --namespace=$NAMESPACE
k apply -f role.yaml
Then create the rolebinding and clusterrolebinding for the install user:
k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>
You will then be able to run the helm install command as the <install_user>.

Clone fusion-cloud-native from GitHub

You should clone this repo from GitHub as you’ll need to run the scripts on your local workstation:
git clone https://github.com/lucidworks/fusion-cloud-native.git
You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.
cd fusion-cloud-native
git pull
Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project instead: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.
The setup_f5_eks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_eks.sh) provided in this repo is strictly optional. The script is mainly intended to help those new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and EKS, you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.
If you’re new to Amazon Web Services (AWS), please visit the AWS getting started page (https://aws.amazon.com/getting-started/) to set up an account. If you’re new to Kubernetes and EKS, we recommend going through Amazon’s EKS workshop (https://eksworkshop.com/introduction/) before proceeding with Fusion.

Set up the AWS CLI tools

Before launching an EKS cluster, you need to install and configure the required AWS command-line tools: kubectl, aws, eksctl, and aws-iam-authenticator. Run aws configure to configure a profile for authenticating to AWS. You’ll use the profile name you configure in this step, which defaults to default, as the -p argument to the setup_f5_eks.sh script in the next section.
When working in Ubuntu, avoid using the eksctl snap version. Alternative sources can have different versions that could cause command failures. Also, always make sure you are using the latest version for each one of the required tools.
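A quick way to confirm the tools are installed and a credentials profile exists (a minimal sketch; these are the standard version and configuration commands for the tools named above):
# Check that each required tool is on the PATH and report its version
aws --version
eksctl version
aws-iam-authenticator version
kubectl version --client
# Configure (or review) the AWS credentials profile used by the setup script
aws configure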

Set up Fusion on EKS

To create a cluster in EKS, the following IAM policies are required:
  • AmazonEC2FullAccess
  • AWSCloudFormationFullAccess
EKS Permissions:
eks:CreateCluster
eks:DeleteCluster
eks:DescribeCluster
eks:DescribeUpdate
eks:ListClusters
eks:ListUpdates
eks:UpdateClusterVersion
VPC Permissions:
ec2:AssociateSubnetCidrBlock
ec2:AssociateVpcCidrBlock
ec2:AttachInternetGateway
ec2:CreateInternetGateway
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:DeleteInternetGateway
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DescribeSubnets
ec2:DescribeVpcAttribute
ec2:DescribeVpcs
ec2:DetachInternetGateway
ec2:DisassociateSubnetCidrBlock
ec2:DisassociateVpcCidrBlock
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute
ec2:ModifyVpcEndpoint
IAM Permissions:
iam:AddRoleToInstanceProfile
iam:AttachRolePolicy
iam:CreateInstanceProfile
iam:CreatePolicy
iam:CreatePolicyVersion
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeletePolicy
iam:DeletePolicyVersion
iam:DeleteRole
iam:DeleteRolePolicy
iam:DetachRolePolicy
iam:GetInstanceProfile
iam:GetPolicy
iam:GetPolicyVersion
iam:GetRole
iam:GetRolePolicy
iam:ListInstanceProfiles
iam:ListInstanceProfilesForRole
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:TagRole
iam:UntagRole
Download and run the setup_f5_eks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_eks.sh) to install Fusion 5.x in an EKS cluster.
This script does not support multiple node pools and should not be used for production clusters.
  • To create a new cluster and install Fusion, run the following command:
    ./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --create demo 
    
    • Replace my-eks-cluster, profile-name, and fusion-namespace with your cluster, profile, and namespace values.
    • Pass the --create option with either demo or multi_az.
  • To use an existing cluster and install Fusion, run the following command:
    ./setup_f5_eks.sh -c cluster-name -p profile-name
    
    • Replace cluster-name with the name of the cluster you already created.
    • Replace profile-name with the name of your profile.
The profile is automatically set to default if you ran the aws configure command without giving the profile a name. Use the --help option to see full script usage.
If using Helm V2, the setup_f5_eks.sh script installs Helm’s tiller component into your EKS cluster with the cluster admin role. If you don’t want this, then please upgrade to Helm v3.
The setup_f5_eks.sh script creates a service account that provides S3 read-only permissions to the created pods.
After running the setup_f5_eks.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below.
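If you need to point kubectl at the new EKS cluster from another shell or workstation, the standard AWS CLI command works (a minimal sketch; substitute your own cluster name, region, and profile):
# Write or refresh the kubeconfig entry for the EKS cluster
aws eks update-kubeconfig --name my-eks-cluster --region <region> --profile profile-name
# Confirm the worker nodes are visible
kubectl get nodes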

EKS cluster overview

The EKS cluster is created using eksctl (https://eksctl.io/). By default, it will set up the following resources in your AWS account:
  • A dedicated VPC for the EKS cluster in the specified region with CIDR: 192.168.0.0/16
  • 3 Public and 3 Private subnets within the created VPC, each with a /19 CIDR range, along with the corresponding route tables.
  • A NAT gateway in each Public subnet
  • An Auto Scaling Group of the instance type specified by the script, which defaults to m5.2xlarge, with 3 instances spanning the public subnets.
See https://eksctl.io/usage/vpc-networking/ for more information on the networking setup.
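You can inspect what eksctl created using its get commands (a minimal sketch; substitute your own cluster name, region, and profile):
# List the cluster and its node groups as created by eksctl
eksctl get cluster --region <region> --profile profile-name
eksctl get nodegroup --cluster my-eks-cluster --region <region> --profile profile-name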

EKS Ingress

The setup_f5_eks.sh script exposes the Fusion proxy service on an external DNS name provided by an ELB over HTTP. This is done for demo or getting-started purposes. However, you’re strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service. See https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/.
Our EKS script creates a classic ELB for exposing the Fusion proxy service. If you need to change this behavior and use the AWS Load Balancer Controller (https://github.com/kubernetes-sigs/aws-load-balancer-controller) instead, you can pass the following parameter when running the setup_f5_eks.sh script:
--deploy-alb     # Tells the script to deploy an ALB
By default, the kube-system namespace is used for installing the aws-load-balancer-controller because the pods’ priorityClassName is set to system-cluster-critical. If you need to deploy an internal ALB, you can use the --internal-alb option. This will create the nodes in the internal subnets. Fusion will be reachable from an AWS instance located in any of the external subnets on the same VPC. To use an ALB, an Ingress with a DNS name is also required; you can use the -h option to create an Ingress with the required DNS name. Finally, use Route 53 or your DNS provider to create an ALIAS A record for your DNS name pointing to the Ingress ADDRESS. You can get the address by listing the Ingress using the command kubectl get ing.
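For example, to look up the address your DNS record should point at (a minimal sketch; add -n <namespace> if Fusion is not installed in the default namespace):
# Show the Ingress and its ADDRESS column (the load balancer hostname)
kubectl get ing
# Or print just the load balancer hostname of the first Ingress
kubectl get ing -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}{"\n"}'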

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation.
Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
Let’s review some useful kubectl commands.

Enhance the K8s Command-line Experience

Here is a list of tools we found useful for improving your command-line experience with Kubernetes:

Useful kubectl commands

kubectl reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Set the namespace for kubectl if not using the default:
kubectl config set-context --current --namespace=<NAMESPACE>
This saves you from having to pass -n with every command.
  • Get a list of running pods: k get pods
  • Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
  • Get pod deployment spec and details: k get pods <pod_id> -o yaml
  • Get details about a pod's events: k describe po <pod_id>
  • Port forward to a specific pod: k port-forward <pod_id> 8983:8983
  • SSH into a pod: k exec -it <pod_id> -- /bin/bash
  • CPU/Memory usage report for pods: k top pods
  • Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
  • Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
  • Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:
kubectl get pods
If all goes well, you should see a list of pods similar to:
NAME                                                        READY   STATUS    RESTARTS   AGE
seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
f5-classic-rest-service-0                                   1/1     Running   3          8m44s
f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
f5-logstash-0                                               1/1     Running   0          8m45s
f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
f5-solr-0                                                   1/1     Running   0          8m44s
f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
f5-zookeeper-0                                              1/1     Running   0          8m45s
The number of pods per deployment / statefulset will vary based on your cluster size and replicaCount settings in your custom values YAML file. Also, don’t worry if you see some pods having been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
To see a list of Fusion services, do:
kubectl get svc
For an overview of the various Fusion 5 microservices, see Fusion microservices. Once you’re ready to build a Fusion cluster for production, see the Fusion 5 Survival Guide.

Upgrading with Zero Downtime

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service. When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases) and requests from other services will be routed to both versions. Consequently, Lucidworks ensures all changes we make to our service do not break the API interface exposed to other services in the same 5.x line of releases. We also ensure stored configuration remains compatible in the same 5.x release line. Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion. To upgrade your cluster at any time, use the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.

Grafana Dashboards

Get the initial Grafana password from a K8s secret by doing:
kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do (replace ${RELEASE} with your Helm release name):
kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000
You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a stronger password. The dashboards and datasource are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.

Configure IAM roles for the service account

Configuring IAM roles lets you utilize the Amazon Web Services Security Token Service (AWS STS) for short-term authentication credentials to access services like the Amazon S3 Simple Storage Service. To configure IAM roles, your user account must be granted admin permissions or IAM:FullAccess. Complete the following steps:
  1. To create the OpenID Connect (OIDC) provider, run the following command:
    eksctl utils associate-iam-oidc-provider --cluster cluster_name --approve
    
  2. To create an IAM role for the service account associated with the plugin-pod, run the following command:
    eksctl create iamserviceaccount --name f5-connector-plugin --namespace default --cluster cluster_name --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess --approve --override-existing-serviceaccounts
    
    This command:
    • Creates an IAM role and attaches the target policy.
    • Updates the existing Kubernetes f5-connector-plugin service account and annotates it with the IAM role.
    • Uses the existing AmazonS3ReadOnlyAccess managed policy.
If the IAM role was already created without the command, and you want to associate the service account, run the following command:
kubectl annotate serviceaccount -n default f5-connector-plugin eks.amazonaws.com/role-arn=arn:aws:iam::411271863668:role/FUS_ROLE --overwrite=true
To utilize this feature, create a data source with the settings in S3 Authentication Settings > AWS Instance Credentials Authentication Settings. For detailed installation information, see the AWS S3 V2 connector documentation.
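To confirm the service account ended up annotated with the IAM role, you can describe it (a minimal sketch; adjust the namespace if you installed Fusion elsewhere):
# The eks.amazonaws.com/role-arn annotation should appear in the output
kubectl describe serviceaccount f5-connector-plugin -n default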

More deployment options

  • How to deploy Fusion 5 in Google Kubernetes Engine
  • How to deploy Fusion 5 in Azure Kubernetes Service
  • How to deploy Fusion 5 on other Kubernetes platforms

Additional resources

LucidAcademy: Lucidworks offers free training to help you get started. The Course for Deploying Fusion 5 focuses on the prerequisite software needed to deploy Fusion, the necessary setup steps, and the physical act of deployment:
Deploying Fusion 5
Visit the LucidAcademy to see the full training catalog.
Fusion supports deployment on Azure Kubernetes Service (AKS). This topic explains how to deploy a Fusion cluster on AKS using the setup_f5_aks.sh script in the fusion-cloud-native repository.
The setup_f5_aks.sh script provides a basic foundation for getting-started and proof-of-concept purposes. For information about custom values in a production-ready environment, see Custom values YAML file.

Prerequisites

This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

Release Name and Namespace

Before installing Fusion, you need to choose a Kubernetes namespace (https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.
All Fusion services must run in the same namespace; do not split a Fusion cluster across multiple namespaces.
Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.

Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Regardless of which Kubernetes platform you’re using, Helm is required to install Fusion. On macOS, you can do:
brew install kubernetes-helm
If you already have helm installed, make sure you’re using the latest version:
brew upgrade kubernetes-helm
For other operating systems, refer to the Helm installation docs: https://helm.sh/docs/using_helm/. The Fusion Helm chart requires Helm greater than version 3.0.0; check your Helm version by running helm version --short.

Helm User Permissions

If Fusion must be installed by a user with minimal permissions rather than an admin user, the role and cluster role that must be assigned to that user within the target namespace are documented in the install-roles directory.
When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl
To use these roles in a cluster, first create the namespace that you wish to install Fusion into as an admin user:
k create namespace fusion-namespace
Apply the role.yaml and cluster-role.yaml files to that namespace:
k apply -f cluster-role.yaml
k config set-context --current --namespace=$NAMESPACE
k apply -f role.yaml
Then create the rolebinding and clusterrolebinding for the install user:
k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>
You will then be able to run the helm install command as the <install_user>.

Clone fusion-cloud-native from GitHub

You should clone this repo from GitHub as you’ll need to run the scripts on your local workstation:
git clone https://github.com/lucidworks/fusion-cloud-native.git
You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.
cd fusion-cloud-native
git pull
Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project instead: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.
The setup_f5_aks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_aks.sh) provided in this repo is strictly optional. The script is mainly intended to help those new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and AKS, you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.
If you’re new to Azure, please visit https://azure.microsoft.com/en-us/free/search/ to set up an account.

Set up the AKS CLI tools

Before launching an AKS cluster, you need to install and configure the required AKS command-line tools: kubectl and az. To confirm your account access and command-line tools are set up correctly, run the az login command (az login --help to see available options).
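A quick check that the tools and your Azure login are ready (a minimal sketch; these are standard Azure CLI and kubectl commands):
# Confirm the Azure CLI and kubectl are installed
az --version
kubectl version --client
# Log in and confirm the active subscription
az login
az account show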

Azure Prerequisites

To launch a cluster in AKS (or pretty much do anything with Azure) you need to set up a Resource Group. Resource Groups are a way of organizing and managing related resources in Azure. For more information about resource groups, see https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups.
You also need to choose a location where you want to spin up your AKS cluster, such as westus2. For a list of locations you can choose, see https://azure.microsoft.com/en-us/global-infrastructure/locations/.
Use the Azure console in your browser to create a resource group, or simply do:
az group create -g $AZURE_RESOURCE_GROUP -l $AZURE_LOCATION
To recap, you should have the following requirements in place:
  • Azure Account set up.
  • azure-cli (az) command-line tools installed.
  • az login working.
  • Created an Azure Resource Group and selected a location to launch the cluster.

Set up Fusion on AKS

Download and run the setup_f5_aks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_aks.sh) to install Fusion 5.x in an AKS cluster. To create a new cluster and install Fusion, simply do:
./setup_f5_aks.sh -c <cluster_name> -p <aks_resource_group>
If you don’t want the script to create a cluster, create a cluster before running the script and simply pass the name of the existing cluster using the -c parameter. Use the --help option to see full script usage.
By default, our script installs Fusion into the default namespace; think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace. You can override the namespace using the -n option. In addition, our script uses f5 for the Helm release name; you can customize this using the -r option. Helm uses the release name you provide to track a specific instance of an installation, allowing you to perform updates and roll back changes for that specific release only.
You can also pass the --preview option to the script, which enables soon-to-be-released features for AKS, such as deploying a multi-zone cluster across 3 availability zones for higher availability guarantees. For more information about the Availability Zone feature, see https://docs.microsoft.com/en-us/azure/aks/availability-zones.
It takes a while for AKS to spin up the new cluster. The cluster will have three Standard_D4_v3 nodes, each with 4 CPU cores and 16 GB of memory. Behind the scenes, our script calls the az aks create command.
If using Helm V2, the setup_f5_aks.sh script installs Helm’s tiller component into your AKS cluster with the cluster admin role. If you don’t want this, then please upgrade to Helm v3.
After running the setup_f5_aks.sh script, proceed to <<verifying,Verifying the Fusion Installation>>.
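If you need to re-point kubectl at the AKS cluster later, the standard Azure CLI command works (a minimal sketch; substitute your own resource group and cluster name):
# Fetch credentials for the AKS cluster and merge them into your kubeconfig
az aks get-credentials --resource-group <aks_resource_group> --name <cluster_name>
# Confirm the worker nodes are visible
kubectl get nodes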

AKS Ingress

The setup_f5_aks.sh script exposes the Fusion proxy service on an external IP over HTTP. This is done for demo or getting started purposes. However, you’re strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service. Use the -t and -h <hostname> options to have our script create an Ingress with a TLS certificate issued by Let’s Encrypt.
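For example, an install that also provisions the TLS-enabled Ingress might look like this (a minimal sketch; the hostname is a placeholder for a domain you own):
# Install Fusion and create an Ingress with a Let's Encrypt TLS certificate for your domain
./setup_f5_aks.sh -c <cluster_name> -p <aks_resource_group> -t -h fusion.example.com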

Upgrades and Ingress

IMPORTANT: If you used the -t -h <hostname> options when installing your cluster, our script created an additional values YAML file named tls-values.yaml. To make things easier for you when upgrading, you should add the settings from this file into your main custom values YAML file. For example:
api-gateway:
  service:
    type: "NodePort"
  ingress:
    enabled: true
    host: "<hostname>"
    tls:
      enabled: true
    annotations:
      "networking.gke.io/managed-certificates": "<RELEASE>-managed-certificate"
      "kubernetes.io/ingress.class": "gce"
This way, you don’t have to remember to pass the additional tls-values.yaml file when upgrading.

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation.
Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
Let’s review some useful kubectl commands.

Enhance the K8s Command-line Experience

Here is a list of tools we found useful for improving your command-line experience with Kubernetes:

Useful kubectl commands

kubectl reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Set the namespace for kubectl if not using the default:
kubectl config set-context --current --namespace=<NAMESPACE>
This saves you from having to pass -n with every command.
  • Get a list of running pods: k get pods
  • Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
  • Get pod deployment spec and details: k get pods <pod_id> -o yaml
  • Get details about pod events: k describe po <pod_id>
  • Port forward to a specific pod: k port-forward <pod_id> 8983:8983
  • SSH into a pod: k exec -it <pod_id> -- /bin/bash
  • CPU/Memory usage report for pods: k top pods
  • Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
  • Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
  • Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:
kubectl get pods
If all goes well, you should see a list of pods similar to:
NAME                                                        READY   STATUS    RESTARTS   AGE
seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
f5-classic-rest-service-0                                   1/1     Running   3          8m44s
f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
f5-logstash-0                                               1/1     Running   0          8m45s
f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
f5-solr-0                                                   1/1     Running   0          8m44s
f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
f5-zookeeper-0                                              1/1     Running   0          8m45s
The number of pods per deployment or statefulset will vary based on your cluster size and the replicaCount settings in your custom values YAML file. Also, don't worry if you see that some pods have been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
To see a list of Fusion services, do:
kubectl get svc
For an overview of the various Fusion 5 microservices, see Fusion microservices. Once you're ready to build a Fusion cluster for production, see the Fusion 5 Survival Guide for more information.

Upgrading with Zero Downtime

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service.
When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases), and requests from other services will be routed to both versions. Consequently, Lucidworks ensures all changes we make to our services do not break the API interface exposed to other services in the same 5.x line of releases. We also ensure stored configuration remains compatible in the same 5.x release line.
Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion. To upgrade your cluster at any time, use the --upgrade option with the setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
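For example, on AKS a hypothetical dry run followed by the actual upgrade might look like the following (all names are placeholders):
./setup_f5_aks.sh -c <cluster_name> -p <aks_resource_group> -n <namespace> -r <release> --upgrade --dry-run
./setup_f5_aks.sh -c <cluster_name> -p <aks_resource_group> -n <namespace> -r <release> --upgrade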

Grafana Dashboards

Get the initial Grafana password from a K8s secret by doing:
kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do the following (replace ${RELEASE} with your Helm release name):
kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000
You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and datasource are set up for you in Grafana; navigate to Dashboards -> Manage to view the available dashboards.
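If you'd rather not expose Grafana externally, a temporary port-forward is a reasonable alternative. This sketch reuses the ${RELEASE} and ${NAMESPACE} variables from above:
kubectl port-forward deployment/${RELEASE}-monitoring-grafana 3000:3000 --namespace "${NAMESPACE}"
Then browse to http://localhost:3000 and log in with the credentials retrieved above.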

More deployment options

  • How to deploy Fusion 5 in Google Kubernetes Engine
  • How to deploy Fusion 5 in Amazon Elastic Kubernetes Service
  • How to deploy Fusion 5 on other Kubernetes platforms

Frequently Asked Questions

Can the stateful database services, for example MySQL, be supported by an Azure PaaS service?
This option is not supported. In theory, it may be possible to implement this function.
Is it possible to use cross-zone storage solutions rather than volumes, such as Microsoft Azure file storage, for stateful services?
While in theory it may be possible to implement this functionality, the configuration has not been tested by Lucidworks and is not supported.

Additional resources

LucidAcademyLucidworks offers free training to help you get started.The Course for Deploying Fusion 5 focuses on the prerequisite software needed to deploy Fusion, the necessary setup steps, and the physical act of deployment:
Deploying Fusion 5Play Button
Visit the LucidAcademy to see the full training catalog.
The setup_f5_k8s.sh script in the fusion-cloud-native repository provides deployment support for any Kubernetes platform, including on-premise, private cloud, public cloud, and hybrid platforms. This script is used by the setup_f5_gke.sh, setup_f5_eks.sh, and setup_f5_aks.sh scripts, which provide additional platform-specific support for Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). This topic explains how to deploy a Fusion cluster in Kubernetes using the setup_f5_k8s.sh script in the fusion-cloud-native repository. If you're deploying on-premises or using a localized repository, you'll need to use a private repository for Docker images.

Prerequisites

This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

Release Name and Namespace

Before installing Fusion, you need to choose a Kubernetes namespace (see https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.
All Fusion services must run in the same namespace, i.e. you should not try to split a Fusion cluster across multiple namespaces.
Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.

Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Regardless of which Kubernetes platform you’re using, you need to install helm as it is required to install Fusion for any K8s platform. On MacOS, you can do:
brew install kubernetes-helm
If you already have helm installed, make sure you’re using the latest version:
brew upgrade kubernetes-helm
For other operating systems, please refer to the Helm installation docs: https://helm.sh/docs/using_helm/. The Fusion Helm chart requires Helm version 3.0.0 or greater; check your Helm version by running helm version --short.

Helm User Permissions

If you require that Fusion be installed by a user with minimal permissions, instead of an admin user, the role and cluster role that must be assigned to that user within the namespace where you wish to install Fusion are documented in the install-roles directory.
When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl
To use these roles in a cluster, as an admin user, first create the namespace that you wish to install Fusion into:
k create namespace fusion-namespace
Apply the role.yaml and cluster-role.yaml files to that namespace:
k apply -f cluster-role.yaml
k config set-context --current --namespace=$NAMESPACE
k apply -f role.yaml
Then bind the role binding and cluster role binding to the install user:
k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>
You will then be able to run the helm install command as the <install_user>.

Clone fusion-cloud-native from GitHub

You should clone this repo from GitHub, as you'll need to run the scripts on your local workstation:
git clone https://github.com/lucidworks/fusion-cloud-native.git
You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.
cd fusion-cloud-native
git pull
Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project from https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.

Deployment

If you're not running on a managed K8s platform like GKE, AKS, or EKS, you can use Helm to install the Fusion chart to an existing Kubernetes cluster. Fusion version 5.5 now includes support for the Rancher Kubernetes Engine (RKE) platform. Before deploying Fusion to RKE, you must download and install the latest RKE software. After configuring your cluster, you can proceed with the Helm v3 installation.
You must have a working cluster configured before performing the Helm v3 installation.

Use Helm v3 to Install Fusion

You should upgrade to the latest version of Helm v3 for working with Fusion. If you need to keep Helm V2 for other clusters, ensure Helm V3 is ahead of Helm V2 in your working shell’s PATH before proceeding.
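A quick sanity check that the helm binary on your PATH is v3:
which helm
helm version --short   # should report a v3.x version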

Customize Fusion Chart Settings

Fusion aims to be well-configured out-of-the-box, but you can customize any of the built-in settings using a custom values YAML file. If you use one of our setup scripts, such as setup_f5_gke.sh, it will create a custom values YAML file for you the first time you run it, using https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.yaml.example as a template. If you're working with Helm directly and not using one of our setup scripts, run the https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.sh script to create a custom values YAML file from the https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.yaml.example template as a starting point:
./customize_fusion_values.sh  -c <cluster> -n <namespace> \
  --provider <provider> --num-solr 1 --node-pool "<node_pool>"
Pass --help for usage details.
In this example:
  • <provider> is the K8s platform you’re running on, such as gke
  • <cluster> is the name of your cluster
  • <namespace> is the K8s namespace where you plan to install Fusion
The --node-pool option specifies the node selector label for determining which nodes to run Fusion pods. You can pass "{}" to let Kubernetes decide which nodes to schedule pods on.
This file is referred to as ${MY_VALUES} in the commands below. Replace the filename with the correct filename for your environment. Keep this file handy, as you'll need it to customize Fusion settings and upgrade to a newer version. Review the settings in the custom values YAML file to ensure the defaults are appropriate for your environment, including the number of Solr and Zookeeper replicas. Add the Lucidworks Helm repo:
helm repo add lucidworks https://charts.lucidworks.com
The customize_fusion_values.sh script creates an upgrade script to install/upgrade Fusion into Kubernetes using Helm. Look in the directory where you ran customize_fusion_values.sh for a script named like: <provider>_<cluster>_<namespace>_upgrade_fusion.sh. Run this script to install Fusion.
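For example, if your provider is gke, your cluster is named search, and your namespace is f5 (names hypothetical), the generated script would be:
./gke_search_f5_upgrade_fusion.sh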

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation.
Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
Let’s review some useful kubectl commands.

Enhance the K8s Command-line Experience

Here is a list of tools we found useful for improving your command-line experience with Kubernetes:

Useful kubectl commands

kubectl reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Set the namespace for kubectl if not using the default:
kubectl config set-context --current --namespace=<NAMESPACE>
This saves you from having to pass -n with every command.
  • Get a list of running pods: k get pods
  • Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
  • Get pod deployment spec and details: k get pods <pod_id> -o yaml
  • Get details about pod events: k describe po <pod_id>
  • Port forward to a specific pod: k port-forward <pod_id> 8983:8983
  • SSH into a pod: k exec -it <pod_id> -- /bin/bash
  • CPU/Memory usage report for pods: k top pods
  • Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
  • Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
  • Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:
kubectl get pods
If all goes well, you should see a list of pods similar to:
NAME                                                        READY   STATUS    RESTARTS   AGE
seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
f5-classic-rest-service-0                                   1/1     Running   3          8m44s
f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
f5-logstash-0                                               1/1     Running   0          8m45s
f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
f5-solr-0                                                   1/1     Running   0          8m44s
f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
f5-zookeeper-0                                              1/1     Running   0          8m45s
The number of pods per deployment or statefulset will vary based on your cluster size and the replicaCount settings in your custom values YAML file. Also, don't worry if you see that some pods have been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
To see a list of Fusion services, do:
kubectl get svc
For an overview of the various Fusion 5 microservices, see Fusion microservices. Once you're ready to build a Fusion cluster for production, see the Fusion 5 Survival Guide for more information.

Upgrading with Zero Downtime

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service.
When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases), and requests from other services will be routed to both versions. Consequently, Lucidworks ensures all changes we make to our services do not break the API interface exposed to other services in the same 5.x line of releases. We also ensure stored configuration remains compatible in the same 5.x release line.
Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion. To upgrade your cluster at any time, use the --upgrade option with the setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.

Grafana Dashboards

Get the initial Grafana password from a K8s secret by doing:
kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do the following (replace ${RELEASE} with your Helm release name):
kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000
You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and datasource are set up for you in Grafana; navigate to Dashboards -> Manage to view the available dashboards.

More deployment options

  • How to deploy Fusion 5 in Google Kubernetes Engine
  • How to deploy Fusion 5 in Amazon Elastic Kubernetes Service
  • How to deploy Fusion 5 in Azure Kubernetes Service

Additional resources

LucidAcademyLucidworks offers free training to help you get started.The Course for Deploying Fusion 5 focuses on the prerequisite software needed to deploy Fusion, the necessary setup steps, and the physical act of deployment:
Deploying Fusion 5Play Button
Visit the LucidAcademy to see the full training catalog.
Before you begin, see Fusion Server Deployment to understand the architecture and requirements. This article explains how to plan and execute a Fusion deployment at the scale required for staging or production. While the setup_f5_*.sh scripts are handy for getting started and proof-of-concept purposes, this article covers the planning process for building a production-ready environment.
LucidAcademyLucidworks offers free training to help you get started.The Course for Preparing for Fusion Implementation focuses on the key elements for a successful implementation, defining your business requirements, preparing clean data, and involving the right personnel:
Preparing for Fusion ImplementationPlay Button
Visit the LucidAcademy to see the full training catalog.

Prerequisites

You must meet the following prerequisites before you can customize your Fusion cluster:
  • A local copy of the fusion-cloud-native repository. This must be up-to-date with the latest master branch.
  • Any cloud provider-specific command line tools, such as gcloud or aws, and kubectl.
    See the platform-specific instructions linked above, or check with your cloud provider.
  • Helm v3
    • To install on a Mac:
    brew install kubernetes-helm
    
    • For other operating systems, download from Helm Releases.
    • Verify your installation:
    helm version --short
    v3.0.0+ge29ce2a
    
  • Kubernetes namespace
    • Collect the following information about your Kubernetes environment:
      • CLUSTER: Cluster name (passed to our setup scripts using the -c arg)
      • NAMESPACE: Kubernetes namespace where to install Fusion; a namespace should only contain lowercase letters (a-z), digits (0-9), or dash. No periods or underscores allowed.
  • (optional) Clarify your organization’s DockerHub policy. The Fusion Helm chart points to public Docker images on DockerHub. Your organization may not allow Kubernetes to pull images directly from DockerHub or may require extra security scanning before loading images into production clusters.
    Consult your Kubernetes and Docker admin team to find how to get the Fusion images loaded into a registry that’s accessible to your cluster. You can update the image for each service using the custom values YAML file.
Kubernetes namespace tips
  • Fusion 5 service discovery requires all services for the same release be deployed in the same namespace. Moreover, you should only run one instance of Fusion in a namespace. If you need multiple instances of Fusion running in the same Kubernetes cluster, then you need to deploy them in separate namespaces.
  • If your organization requires CPU / Memory quotas for namespaces, you can start with a minimum of 12 CPU and 45GB of RAM (such as 3 x n1-standard-4 on GKE), but you will need to increase the quotas once you start load testing Fusion with production workloads and real datasets.
  • Fusion requires at least 3 ZooKeeper nodes and 2 Solr nodes to achieve high availability.

Custom values YAML file

  1. Clone the fusion-cloud-native repository: git clone https://github.com/lucidworks/fusion-cloud-native
  2. Run the customize_fusion_values.sh script.
    ./customize_fusion_values.sh  --provider <provider> -c <cluster> -n <namespace> \
     --num-solr 3 \
     --solr-disk-gb 100 \
     --node-pool <node_selector> \
     --prometheus true \
     --with-resource-limits \
     --with-affinity-rules
    
    Pass the --help parameter to see script usage details.
    The script creates the following files:
      • <provider>_<cluster>_<namespace>_fusion_values.yaml: Main custom values YAML used to override Helm chart defaults for Fusion microservices.
      • <provider>_<cluster>_<namespace>_monitoring_values.yaml: Custom values YAML used to configure Prometheus and Grafana.
      • <provider>_<cluster>_<namespace>_fusion_resources.yaml: Resource requests and limits for all microservices.
      • <provider>_<cluster>_<namespace>_fusion_affinity.yaml: Pod affinity rules to ensure multiple replicas for a single service are evenly distributed across zones and nodes.
      • <provider>_<cluster>_<namespace>_upgrade_fusion.sh: Script used to install and/or upgrade Fusion using the aforementioned custom values YAML files.
    For an explanation of these placeholder values, see Configuration Values below.
  3. Add the new files to version control. You will make changes to it over time as you fine-tune your Fusion installation. You will also need it to perform upgrades. If you try to upgrade your Fusion installation and don’t provide the custom values YAML, your deployment will revert to chart defaults.
    Review the <provider>_<cluster>_<release>_fusion_values.yaml file to familiarize yourself with its structure and contents. Notice it contains a separate section for each of the Fusion microservices. The example configuration of the query-pipeline service below illustrates some important concepts about the custom values YAML file.
    query-pipeline: ①
      enabled: true ②
      nodeSelector: ③
        cloud.google.com/gke-nodepool: default-pool
      javaToolOptions: "..." ④
      pod: ⑤
        annotations:
          prometheus.io/port: "8787"
          prometheus.io/scrape: "true"
          prometheus.io/path: "/actuator/prometheus"
① Service-specific setting overrides under the top-level heading
② Every Fusion service has an implicit enabled flag that defaults to true; set it to false to remove the service from your cluster
③ The node selector identifies the label used to find nodes on which to schedule this service's pods
④ Used to pass JVM options to the service
⑤ Pod annotations to allow Prometheus to scrape metrics from the service
Once we go through all of the configuration topics in this topic, you’ll have a well-configured custom values YAML file for your Fusion 5 installation. You’ll then use this file during the Helm v3 installation at the end of this topic.

Deployment-specific values

The script creates a custom values YAML file using the naming convention: <provider>_<cluster>_<namespace>_fusion_values.yaml. For example, gke_search_f5_fusion_values.yaml.
  • <provider>: The K8s platform you're running on, such as gke.
  • <cluster>: The name of your cluster.
  • <namespace>: The K8s namespace where you want to install Fusion.
  • <node_selector>: Specifies a nodeSelector label to find nodes to schedule Fusion pods on.
Providing the correct --node-pool <node_selector> label is very important. Using the wrong value will cause your pods to be stuck in the pending state. If you're not sure about the correct value for your cluster, pass "{}" to let Kubernetes decide which nodes to schedule Fusion pods on.
Default nodeSelector labels are provider-specific. The fusion-cloud-native scripts use the following defaults for GKE and EKS:
  • GKE: cloud.google.com/gke-nodepool: default-pool
  • EKS: alpha.eksctl.io/nodegroup-name: standard-workers
If you are deploying Fusion 5.9.12, add the following to your values.yaml file to avoid a known issue that prevents the kuberay-operator pod from launching successfully:
kuberay-operator:
  crd:
    create: true

Flags

The script provides flags for additional configuration:
  • --node-pool: Add a Fusion-specific label to your nodes.
  • --with-resource-limits: Configure resource requests/limits.
  • --with-replicas: Configure replica counts.
  • --with-affinity-rules: Configure pod affinity rules for Fusion services.
Use --node-pool to add a Fusion specific label to your nodes by doing:
kubectl label node <NODE_ID> fusion_node_type=<NODE_LABEL>
Then, pass --node-pool 'fusion_node_type: <NODE_LABEL>'.

Configure Solr sizing

When you're ready to build a production-ready setup for Fusion 5, you need to customize the Fusion Helm chart to ensure Fusion is well-configured for production workloads. You'll be able to scale the number of nodes for Solr up and down after building the cluster, but you need to establish the initial size of the nodes (memory and CPU) and the size and type of disks you need. See the example config below to learn which parameters to change in the custom values YAML file.
solr:
  resources:                    # Set resource limits for Solr to help K8s pod scheduling;
    limits:                     # these limits are not just for the Solr process in the pod,
      cpu: "7700m"              # so allow ample memory for loading index files into the OS cache (mmap)
      memory: "26Gi"
    requests:
      cpu: "7000m"
      memory: "25Gi"
  logLevel: WARN
  nodeSelector:
    fusion_node_type: search    # Run this Solr StatefulSet in the "search" node pool
  exporter:
    enabled: true               # Enable the Solr metrics exporter (for Prometheus) and
                                # schedule on the default node pool (system partition)
    podAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9983"
      prometheus.io/path: "/metrics"
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
  image:
    tag: 8.4.1
  updateStrategy:
    type: "RollingUpdate"
  javaMem: "-Xmx3g -Dfusion_node_type=system" # Configure memory settings for Solr
  solrGcTune: "-XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+UseStringDeduplication -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=150 -XX:+UseLargePages -XX:+AlwaysPreTouch"
  volumeClaimTemplates:
    storageSize: "100Gi"        # Size of the Solr disk
  replicaCount: 6               # Number of Solr pods to run in this StatefulSet

zookeeper:
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool
  replicaCount: 3               # Number of Zookeepers
  persistence:
    size: 20Gi
  resources: {}
  env:
    ZK_HEAP_SIZE: 1G
    ZOO_AUTOPURGE_PURGEINTERVAL: 1
To be clear, you can tune GC settings and the number of replicas after the cluster is built, but changing the size of the persistent volumes is more complicated, so try to pick a good size initially.

Configure storage class for Solr pods (optional)

If you wish to run with a storage class other than the default, you can create a storage class for your Solr pods before you install. For example, to create regional disks in GCP, create a file called storageClass.yaml with the following contents:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: solr-gke-storage-regional
provisioner: kubernetes.io/gce-pd
parameters:
 type: pd-standard
 replication-type: regional-pd
 zones: us-west1-b, us-west1-c
Then provision it into your cluster by calling:
kubectl apply -f storageClass.yaml
Then have Solr use the storage class by adding the following to the custom values YAML:
solr:
  volumeClaimTemplates:
    storageClassName: solr-gke-storage-regional
    storageSize: 250Gi
We’re not advocating that you must use regional disks for Solr storage, as that would be redundant with Solr replication. We’re just using this as an example of how to configure a custom storage class for Solr disks if you see the need. For instance, you could use regional disks without Solr replication for write-heavy type collections.

Configure multiple node pools

Lucidworks recommends isolating search workloads from analytics workloads using multiple node pools. The included scripts do not do this for you; this is a manual process. For an example script for GKE, see create_gke_cluster_node_pools.sh. In the custom values YAML file, you can add additional Solr StatefulSets by adding their names to the list under the nodePools property. If any property for that StatefulSet needs to be changed from the default set of values, it can be set directly on the object representing the node pool; any properties that are omitted default to the base value. See the following example (additional whitespace added for display purposes only):
solr:
  nodePools:
    - name: ""
    - name: "analytics"
      javaMem: "-Xmx6g"
      replicaCount: 6
      storageSize: "100Gi"
      nodeSelector:
        fusion_node_type: analytics ③
      resources:
        requests:
          cpu: 2
          memory: 12Gi
        limits:
          cpu: 3
          memory: 12Gi
    - name: "search"
      javaMem: "-Xms11g -Xmx11g"
      replicaCount: 12
      storageSize: "50Gi"
      nodeSelector:
        fusion_node_type: search ⑤
      resources:
        limits:
          cpu: "7700m"
          memory: "26Gi"
        requests:
          cpu: "7000m"
          memory: "25Gi"
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool ⑥
...
① The empty string "" is the suffix for the default partition.
② Overrides the settings for the analytics Solr pods.
③ Assigns the analytics Solr pods to the node pool and attaches the label fusion_node_type=analytics. You can use the fusion_node_type property in Solr auto-scaling policies to govern replica placement during collection creation.
④ Overrides the settings for the search Solr pods.
⑤ Assigns the search Solr pods to the node pool and attaches the label fusion_node_type=search.
⑥ Sets the default settings for all Solr pods, if not specifically overridden in the nodePools section above.
Do not edit the nodePools value "".
In the example above, the analytics partition replicaCount, or number of Solr pods, is six. The search partition replicaCount is twelve. Each nodePool is automatically assigned the -Dfusion_node_type property of search, system, or analytics. This value matches the name of the nodePool, for example, -Dfusion_node_type=search. The Solr pods therefore carry a fusion_node_type system property.

Solr auto-scaling policy

Use replica placement plugins to control how replicas are placed in Solr.

Pod network policy

A Kubernetes network policy governs how groups of pods communicate with each other and with other network endpoints. With Fusion, all incoming traffic flows through the API Gateway service. All Fusion services in the same namespace expect an internal JWT, which is supplied by the Gateway, as part of the request. As a result, Fusion services enforce a basic level of API security and don't need an additional network policy to protect them from other pods in the cluster. To install the network policy for Fusion services, pass --set global.networkPolicyEnabled=true when installing the Fusion Helm chart.
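For example, if you installed with Helm directly, enabling the policy on an existing release might look like this sketch, which reuses the ${RELEASE}, ${NAMESPACE}, and ${MY_VALUES} variables used elsewhere in this topic:
helm upgrade ${RELEASE} lucidworks/fusion --namespace "${NAMESPACE}" --values "${MY_VALUES}" --set global.networkPolicyEnabled=true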

On-premises private Docker registries

For on-premises Kubernetes deployments, your organization may not allow Kubernetes to pull Fusion’s Docker images from DockerHub. See the instructions below for details on using a private Docker registry with Fusion. These are general instructions that may need to be adapted to work within your organization’s security policies:
  1. Transfer the public images from DockerHub to your private Docker registry.
  2. Establish a workstation that has access to DockerHub. This workstation must connect to your internal Docker registry, most likely via VPN connection. In this example, the workstation is referred to as envoy.
  3. Install Docker on envoy. You need at least 100GB of free disk for Docker.
  4. Pull all of the images from DockerHub to envoy’s local registry. For example, to pull the query pipeline image, run docker pull lucidworks/query-pipeline:5.9.0. See docker pull --help for more information about pulling Docker images.
  5. Establish a connection from envoy to the private Docker registry, most likely via a VPN connection. In this example, the private Docker registry is referred to as <internal-private-registry>.
  6. Push the images from envoy’s Docker registry to the private registry. This will take a long time.
    1. You’ll need to re-tag all images for the internal registry. For example, to tag the query-pipeline image, run:
    docker tag lucidworks/query-pipeline:5.9.0 <internal-private-registry>/query-pipeline:5.9.0
    
    2. Push each image to the internal repo:
    docker push <internal-private-registry>/query-pipeline:5.9.0
    
  7. Install the Docker registry secret in Kubernetes. Create the Docker registry secret in the Kubernetes namespace where you want to install Fusion:
    SECRET_NAME=<internal-private-secret>
    REPO=<internal-private-registry>
    
    kubectl create secret docker-registry "${SECRET_NAME}" \
     --namespace "${NAMESPACE}" \
     --docker-server="${REPO}" \
     --docker-username=${REPO_USER} \
     --docker-password=${REPO_PASS} \
     --docker-email=${REPO_USER}
    
    For details, see the Kubernetes article Pull an Image from a Private Registry.
  8. Update the custom values YAML for your cluster to point to your private registry and secret to allow Kubernetes to pull images. For example:
    query-pipeline:
     image:
       imagePullSecrets:
         - name: <internal-private-secret>
       repository: <internal-private-registry>
    
    Repeat the process for all Fusion services.
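If you have many images to mirror, a small loop on envoy can reduce the manual work. This is a sketch only; the image names and tag below are illustrative, so substitute the full list and version that match your Fusion release:
# illustrative image names and tag; adjust for your Fusion version
for img in query-pipeline fusion-indexing api-gateway; do
  docker pull "lucidworks/${img}:5.9.0"
  docker tag "lucidworks/${img}:5.9.0" "<internal-private-registry>/${img}:5.9.0"
  docker push "<internal-private-registry>/${img}:5.9.0"
done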

Customize Helm Chart

Every Fusion service allows you to override the imagePullSecrets setting using the custom values YAML. However, other third-party services (including Zookeeper, Pulsar, Prometheus, and Grafana) don't allow you to supply the pull secret using the custom values YAML. To patch the default service account for your namespace and add the pull secret, run the following:
kubectl patch sa default -n $NAMESPACE \
  -p '"imagePullSecrets": [{"name": "<internal-private-secret>" }]'
On Windows, using PowerShell or another CLI, you might have to escape the double quotes with a backslash (\) or reverse the order of single and double quotes:
kubectl patch sa default -n $NAMESPACE \
  -p "'imagePullSecrets': [{'name': '<internal-private-secret>'}]"
Replace <internal-private-secret> with the name of the secret you created in the steps above.
This allows the default service account to pull images from the private registry without specifying the pull secret on the resources directly.
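To confirm the patch took effect, you can inspect the default service account; a minimal check:
kubectl get sa default -n $NAMESPACE -o jsonpath='{.imagePullSecrets}'
The output should include the name of your pull secret.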

Add additional trusted certificate(s) to Fusion’s indexing and querying services (optional)

You can add custom trusted certificates to support Fusion's indexing and querying services. You may want to use custom trusted certificates if, for example, you have specific security requirements for data handling or need to support an existing infrastructure and its security needs. This method involves updating your Helm chart. If you want to add custom trusted certificates for both the indexing and querying services, follow these instructions twice: once for the indexing service, and once for the querying service. To add different certificates to the indexing and querying services, create one YAML file with the indexing service certificates and one YAML file with the querying service certificates before following these instructions.
You may use the same YAML file if you want to use the same certificates for both services.
To add custom trusted certificates:
  1. Create a new YAML file for your custom trusted certificates. The customcerts.yaml file is the example file in these instructions.
  2. Add the custom certificate(s) in the YAML file created in the previous step. For example:
    trustedCertificates:
     enabled: true
     files:
       some.cert: |-
         -----BEGIN CERTIFICATE-----
         MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
         (...)
         EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
         -----END CERTIFICATE-----
       other.cert: |-
         -----BEGIN CERTIFICATE-----
         MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
         (...)
         EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
          -----END CERTIFICATE-----
    
  3. Update the indexing or querying service by running the following Helm command. Replace EXAMPLE-VALUES-FILE.yaml with your previous values file.
    helm upgrade --install --namespace ${EXAMPLE-NAMESPACE} ${HELM-RELEASE} ${HELM-CHART-PATH} --values EXAMPLE-VALUES-FILE.yaml --values customcerts.yaml
    
  4. Verify the indexing or querying pod has a new init-container with the name import-certs.
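As a quick check, you can list the init containers on the pod; this is a sketch where <indexing-or-querying-pod> is a placeholder for the actual pod name in your namespace:
kubectl get pod <indexing-or-querying-pod> -o jsonpath='{.spec.initContainers[*].name}'
The output should include import-certs.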

Add additional trusted certificate(s) for connectors to allow crawling of web resources with SSL/TLS enabled (optional)

To crawl a datasource that uses a self-signed certificate, add the required certificates to the connectors services. For example:
classic-rest-service:
  trustedCertificates:
    enabled: true
    files:
      some.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
      other.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
connector-plugin:
  trustedCertificates:
    enabled: true
    files:
      some.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----
      other.cert: |-
        -----BEGIN CERTIFICATE-----
        MIIDeTCCAmGgAwIBAgIJAPziuikCTox4MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
        (...)
        EVA0pmzIzgBg+JIe3PdRy27T0asgQW/F4TY61Yk=
        -----END CERTIFICATE-----

Generating the certificate on linux command line

Use the following command to generate a .crt file in $fusion_home/apps/jetty/connectors/etc/yourcertname.crt:
openssl s_client -servername remote.server.net -connect remote.server.net:443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >$fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using Firefox web browser

  1. Navigate to the SharePoint host.
  2. Click the padlock icon in the address bar, then click the arrow icon.
  3. Next, navigate to More Information > View Certificate > Export.
    Save the file to the following folder: $fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using Chrome web browser

  1. Navigate to Chrome menu > More Tools > Developer Tools > Security Tab. This will display the Security overview.
  2. Click the View certificate button.
  3. Save the file to the following folder: $fusion_home/apps/jetty/connectors/etc/yourcertname.crt

Generating the certificate using powershell

Use the following script to generate a .crt file in $fusion_home\apps\jetty\connectors\etc\yourcertname.crt:
$fusion_home = "c:\your\fusion\install\directory"
$webRequest = [Net.WebRequest]::Create("https://your-hostname")
try { $webRequest.GetResponse() } catch {}
$cert = $webRequest.ServicePoint.Certificate
$bytes = $cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Cert)
set-content -value $bytes -encoding byte -path "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt"
certutil -encode "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt" "$fusion_home\apps\jetty\connectors\etc\yourcertname.crt"
rm "$fusion_home\apps\jetty\connectors\etc\yourcertname.binary.crt" -f

Install Fusion 5 on Kubernetes

At this point, you're ready to install Fusion 5 using the custom values YAML files and upgrade script. If you used the customize_fusion_values.sh script, run the generated upgrade script using BASH:
./gke_search_f5_upgrade_fusion.sh
Once the installation is complete, verify your Fusion installation is running correctly.

Monitoring Fusion with Prometheus and Grafana

Lucidworks recommends using Prometheus and Grafana for monitoring the performance and health of your Fusion cluster. Your operations team may already have these services installed. If not, install them into the Fusion namespace.
The custom values YAML file shown above activates the Solr metrics exporter service and adds pod annotations so Prometheus can scrape metrics from Fusion services.
  1. Run the customize_fusion_values.sh script with the --prometheus true option. This creates an extra custom values YAML file for installing Prometheus and Grafana, <provider>_<cluster>_<namespace>_monitoring_values.yaml. For example: gke_search_f5_monitoring_values.yaml.
  2. Commit the YAML file to version control.
  3. Review its contents to ensure that the settings suit your needs. For example, decide how long you want to keep metrics. The default is 36 hours.
    See the Prometheus documentation and Grafana documentation for details.
  4. Run the install_prom.sh script to install Prometheus & Grafana in your namespace. Include the provider, cluster name, namespace, and helm release as in the example below:
    ./install_prom.sh --provider gke -c search -n f5 -r 5-5-1
    
    Pass the --help parameter to see script usage details.
    The Grafana dashboards from monitoring/grafana are installed automatically by the install_prom.sh script.

Upgrade Fusion on Kubernetes

Upgrade information about Kubernetes is included in the Fusion 5 Upgrades topic.
This guide describes how to perform Fusion 5 upgrades.
Before upgrading, be aware of changes by checking for Deprecations and Removals between versions.
Lucidworks recommends upgrading to the next minor version only. For example, you should upgrade from Fusion 5.6.1 to Fusion 5.7.1 before upgrading to Fusion 5.8.0. The general upgrade process is described in this article. However, the specific upgrade procedures may vary depending on your upgrade path. For the most accurate instructions, refer to the upgrade article specific to your upgrade path.

General upgrade process

Fusion natively supports deployments on supported Kubernetes platforms, including AKS, EKS, and GKE. Fusion includes an upgrade script for AKS, EKS, and GKE; this script is not generated for other Kubernetes deployments. Upgrades differ from platform to platform; see below for more information about upgrading on your platform of choice. Whenever you upgrade Fusion, you must also update your remote connectors, if you are running any. You can download the latest files at V2 Connectors Downloads.

Natively supported deployment upgrades

Deployment type and its platform value:
  • Azure Kubernetes Service (AKS): aks
  • Amazon Elastic Kubernetes Service (EKS): eks
  • Google Kubernetes Engine (GKE): gke
Fusion includes upgrade scripts for natively supported deployment types. To upgrade:
  1. Open the <platform>_<cluster>_<release>_upgrade_fusion.sh upgrade script file for editing.
  2. Update the CHART_VERSION to your target Fusion version, and save your changes.
  3. Run the <platform>_<cluster>_<release>_upgrade_fusion.sh script. The <release> value is the same as your namespace, unless you overrode the default value using the -r option.
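For example, on GKE with a cluster named search and release f5 (names hypothetical), a quick check of the target version followed by the upgrade might look like:
grep CHART_VERSION gke_search_f5_upgrade_fusion.sh
./gke_search_f5_upgrade_fusion.sh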
After running the upgrade, use kubectl get pods to see the changes applied to your cluster. It may take several minutes to perform the upgrade, as new Docker images are pulled from DockerHub. To see the versions of running pods, do:
kubectl get po -o jsonpath='{..image}'  | tr -s '[[:space:]]' '\n' | sort | uniq

Other Kubernetes deployment upgrades

To update an existing installation, do:
RELEASE=f5
NAMESPACE=default
helm repo update
helm upgrade ${RELEASE} "lucidworks/fusion" --namespace "${NAMESPACE}" --values "${MY_VALUES}"
Except for ZooKeeper, all K8s deployments and statefulsets use a RollingUpdate update policy:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
ZooKeeper instances use OnDelete to avoid changing critical stateful pods in the Fusion deployment. To apply changes to Zookeeper after performing the upgrade (uncommon), you need to manually delete the pods. For example:
kubectl delete pod f5-zookeeper-0
Delete one pod at a time, and verify the new pod is healthy and serving traffic before deleting the next one.
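A minimal sketch of that one-at-a-time flow, assuming three ZooKeeper pods named f5-zookeeper-0 through f5-zookeeper-2:
kubectl delete pod f5-zookeeper-0
kubectl wait --for=condition=Ready pod/f5-zookeeper-0 --timeout=300s
kubectl delete pod f5-zookeeper-1
kubectl wait --for=condition=Ready pod/f5-zookeeper-1 --timeout=300s
kubectl delete pod f5-zookeeper-2
kubectl wait --for=condition=Ready pod/f5-zookeeper-2 --timeout=300s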
You can also set the updateStrategy under the zookeeper section in your "${MY_VALUES}" file:
solr:
  ...
  zookeeper:
    updateStrategy:
      type: "RollingUpdate"

Upgrades with Helm v3

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. For example, Fusion 5 allows customers to upgrade from Fusion 5.1.0 to a later 5.x.y version on a live cluster with zero downtime or disruption of service.When Kubernetes performs a rolling update to an individual microservice, there is a mix of old and new services in the cluster. Requests from other services route to both versions.
Lucidworks ensures all changes we make to our service do not break the API interface exposed to other services in the same minor release version (5.x). We also ensure that the stored configuration remains compatible in the same minor release version.
Lucidworks releases minor updates to individual services frequently. Pull in those upgrades using Helm at your discretion.
How to upgrade Fusion:
  1. Clone the fusion-cloud-native repo, if you haven’t already.
  2. Locate the setup_f5_<platform>.sh script that matches your Kubernetes platform.
  3. Run the script with the --upgrade option.
    To see what would be upgraded, pass the --dry-run option to the script.
The scripts in the fusion-cloud-native repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks.

Helm upgrade script

Once you deploy a working cluster, use the upgrade script created by the customize_fusion_values.sh script. The upgrade script hard-codes the parameters, so you don't have to remember which parameters to pass to the script. This is helpful when working with multiple K8s clusters. Make sure you check the script into version control alongside your custom values YAML files. Whenever you change the custom values YAML files for your cluster, you need to run the upgrade script to apply the changes. The script calls helm upgrade with the correct parameters and --values options.
If you run helm upgrade without passing the custom values YAML files, the deployment will revert to using chart defaults, which you never want to do.
The script assumes your kubeconfig is pointing to the correct cluster and that you're using Helm v3. If not, the upgrade fails. Select the correct kubeconfig before running the script.
I