Fusion 5.11

    Deploy Fusion 5 on Azure Kubernetes Service (AKS)

    Fusion supports deployment on Azure Kubernetes Service (AKS). This topic explains how to deploy a Fusion cluster on AKS using the setup_f5_aks.sh script in the fusion-cloud-native repository.

The setup_f5_aks.sh script is intended as a starting point for getting-started and proof-of-concept deployments. For information about custom values in a production-ready environment, see Custom values YAML file.

    Prerequisites

    This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

    Release Name and Namespace

    Before installing Fusion, you need to choose a Kubernetes namespace to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.

    NOTE: All Fusion services must run in the same namespace, i.e. you should not try to split a Fusion cluster across multiple namespaces.

    Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.

    Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Helm is required to install Fusion on any Kubernetes platform. On macOS, you can do:

    brew install helm

    If you already have helm installed, make sure you’re using the latest version:

    brew upgrade helm

    For other operating systems, see the Helm installation docs: https://helm.sh/docs/using_helm/

    The Fusion Helm chart requires Helm version 3.0.0 or later. Check your Helm version by running helm version --short.
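    For example, the output should report a v3 version (your exact version will differ):

    helm version --short
    v3.11.1+g293b50c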

    Helm User Permissions

    If Fusion must be installed by a user with minimal permissions rather than an admin user, the role and cluster role that must be assigned to that user within the namespace where you plan to install Fusion are documented in the install-roles directory.

    When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
    alias k=kubectl

    To use these roles in a cluster, first create, as an admin user, the namespace that you wish to install Fusion into:

    k create namespace fusion-namespace

    Apply the role.yaml and cluster-role.yaml files to that namespace:

    k apply -f cluster-role.yaml
    k config set-context --current --namespace=fusion-namespace
    k apply -f role.yaml

    Then create a rolebinding and clusterrolebinding for the install user:

    k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
    k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>

    You will then be able to run the helm install command as the <install_user>.
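    For example, a minimal sketch of the install, assuming the Lucidworks chart repository at https://charts.lucidworks.com, the lucidworks/fusion chart, and a hypothetical values file my-values.yaml:

    helm repo add lucidworks https://charts.lucidworks.com
    helm repo update
    # run as <install_user>, in the namespace the roles were bound to
    helm install f5 lucidworks/fusion --namespace fusion-namespace --values my-values.yaml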

    Clone fusion-cloud-native from GitHub

    You should clone this repo from GitHub, as you’ll need to run the scripts on your local workstation:

    git clone https://github.com/lucidworks/fusion-cloud-native.git

    You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.

    cd fusion-cloud-native
    git pull

    Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.

    The setup_f5_aks.sh script provided in this repo is strictly optional. The script is mainly to help those new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and AKS, then you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described here.

    If you’re new to Azure, then please visit https://azure.microsoft.com/en-us/free/search/ to set up an account.

    Set up the AKS CLI tools

    Before launching an AKS cluster, you need to install and configure kubectl and az using the links provided below:

    Required AKS Command-line Tools:
    1. kubectl: Install kubectl

    2. az: Installing the Azure CLI

    To confirm your account access and command-line tools are set up correctly, run the az login command (az login --help to see available options).
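    For example:

    az login
    # confirm the expected subscription is active
    az account show -o table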

    Azure Prerequisites

    To launch a cluster in AKS (or do almost anything with Azure) you need to set up a Resource Group. Resource Groups are a way of organizing and managing related resources in Azure. For more information about resource groups, see https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups.

    You also need to choose a location where you want to spin up your AKS cluster, such as westus2. For a list of locations you can choose, see https://azure.microsoft.com/en-us/global-infrastructure/locations/.

    Use the Azure console in your browser to create a resource group, or simply do:

    az group create -g $AZURE_RESOURCE_GROUP -l $AZURE_LOCATION

    To recap, you should have the following requirements in place:
    1. Azure Account set up.

    2. azure-cli (az) command-line tools installed.

    3. az login working.

    4. Created an Azure Resource Group and selected a location to launch the cluster.
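    Putting these together, a minimal sequence might look like the following (the resource group name and location are placeholders):

    export AZURE_RESOURCE_GROUP=my-fusion-rg
    export AZURE_LOCATION=westus2
    az group create -g $AZURE_RESOURCE_GROUP -l $AZURE_LOCATION
    # confirm the resource group exists
    az group show -g $AZURE_RESOURCE_GROUP -o table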

    Set up Fusion on AKS

    Download and run the setup_f5_aks.sh script to install Fusion 5.x in an AKS cluster. To create a new cluster and install Fusion, simply do:

    ./setup_f5_aks.sh -c <cluster_name> -p <aks_resource_group>

    If you don’t want the script to create a cluster, then you need to create a cluster before running the script and simply pass the name of the existing cluster using the -c parameter.

    Use the --help option to see full script usage.

    By default, our script installs Fusion into the default namespace. As noted in Release Name and Namespace above, you can install multiple instances of Fusion in the same cluster in separate namespaces, but do not install more than one Fusion release in the same namespace.

    You can override the namespace using the -n option. In addition, our script uses f5 for the Helm release name; you can customize this using the -r option. Helm uses the release name you provide to track a specific instance of an installation, allowing you to perform updates and rollback changes for that specific release only.
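    For example, to install into a custom namespace with a custom release name (cluster and resource group names are placeholders):

    ./setup_f5_aks.sh -c my-fusion-cluster -p my-fusion-rg -n fusion-dev -r fusion-dev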

    You can also pass the --preview option to the script, which enables soon-to-be-released features for AKS, such as deploying a multi-zone cluster across 3 availability zones for higher availability guarantees. For more information about the Availability Zone feature, see https://docs.microsoft.com/en-us/azure/aks/availability-zones.

    It takes a while for AKS to spin up the new cluster. The cluster will have three Standard_D4_v3 nodes which have 4 CPU cores and 16 GB of memory. Behind the scenes, our script calls the az aks create command.
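    If you prefer to create the cluster yourself and pass its name to the script with -c, a sketch of a roughly equivalent az aks create call (mirroring the defaults described above; names are placeholders) is:

    az aks create --name my-fusion-cluster \
      --resource-group $AZURE_RESOURCE_GROUP \
      --node-count 3 \
      --node-vm-size Standard_D4_v3 \
      --generate-ssh-keys
    # point kubectl at the new cluster
    az aks get-credentials --name my-fusion-cluster --resource-group $AZURE_RESOURCE_GROUP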

    If using Helm V2, the setup_f5_aks.sh script installs Helm’s tiller component into your AKS cluster with the cluster admin role. If you don’t want this, then please upgrade to Helm v3.

    After running the setup_f5_aks.sh script, proceed to Verifying the Fusion Installation.

    AKS Ingress

    The setup_f5_aks.sh script exposes the Fusion proxy service on an external IP over HTTP. This is done for demo or getting started purposes. However, you’re strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service.

    Use the -t and -h <hostname> options to have our script create an Ingress with a TLS certificate issued by Let’s Encrypt.
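    For example (the hostname is a placeholder for a DNS name you control and can point at the cluster):

    ./setup_f5_aks.sh -c my-fusion-cluster -p my-fusion-rg -t -h fusion.example.com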

    Upgrades and Ingress

    If you used the -t -h <hostname> options when installing your cluster, our script created an additional values yaml file named tls-values.yaml.

    To make things easier for you when upgrading, you should add the settings from this file into your main custom values yaml file. For example:

    api-gateway:
      service:
        type: "NodePort"
      ingress:
        enabled: true
        host: "<hostname>"
        tls:
          enabled: true
        annotations:
          "networking.gke.io/managed-certificates": "<RELEASE>-managed-certificate"
          "kubernetes.io/ingress.class": "gce"

    This way, you don’t have to remember to pass the additional tls-values.yaml file when upgrading. Note that the exact annotation keys depend on your platform and ingress controller (the example above shows GKE-style annotations), so copy the settings the script actually wrote to tls-values.yaml.
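    Once the TLS settings are merged, an upgrade needs only your main values file. A sketch, assuming the release f5, the default namespace, the lucidworks/fusion chart, and a hypothetical my-values.yaml:

    helm repo update
    helm upgrade f5 lucidworks/fusion --namespace default --values my-values.yaml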

    Verifying the Fusion Installation

    In this section, we provide some tips on how to verify the Fusion installation.

    Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
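    If you have not exposed the proxy yet, you can reach the Admin UI through a temporary port-forward; a sketch, assuming the gateway service in your namespace is named proxy:

    k port-forward svc/proxy 6764:6764
    # then browse to http://localhost:6764/admin/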

    Let’s review some useful kubectl commands.


    Useful kubectl commands

    Set the namespace for kubectl if not using the default:

    kubectl config set-context --current --namespace=<NAMESPACE>

    This saves you from having to pass -n with every command.

    Get a list of running pods: k get pods

    Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline

    Get pod deployment spec and details: k get pods <pod_id> -o yaml

    Get details about a pod’s events: k describe po <pod_id>

    Port forward to a specific pod: k port-forward <pod_id> 8983:8983

    Open a shell in a pod: k exec -it <pod_id> -- /bin/bash

    CPU/Memory usage report for pods: k top pods

    Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0

    Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N

    Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

    Check Fusion Pods and Services

    Once the install script completes, you can check that all pods and services are available using:

    kubectl get pods

    If all goes well, you should see a list of pods similar to:

    NAME                                                        READY   STATUS    RESTARTS   AGE
    seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
    f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
    f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
    f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
    f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
    f5-classic-rest-service-0                                   1/1     Running   3          8m44s
    f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
    f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
    f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
    f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
    f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
    f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
    f5-logstash-0                                               1/1     Running   0          8m45s
    f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
    f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
    f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
    f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
    f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
    f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
    f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
    f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
    f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
    f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
    f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
    f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
    f5-solr-0                                                   1/1     Running   0          8m44s
    f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
    f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
    f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
    f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
    f5-zookeeper-0                                              1/1     Running   0          8m45s

    The number of pods per deployment / statefulset will vary based on your cluster size and replicaCount settings in your custom values YAML file. Also, don’t worry if you see some pods having been restarted, as that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service.

    If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.

    To see a list of Fusion services, do:

    kubectl get svc

    For an overview of the various Fusion 5 microservices, see: https://doc.lucidworks.com/fusion/5.3/149/fusion-microservices

    Once you’re ready to build a Fusion cluster for production, please see the Fusion 5 Survival Guide in this repo.

    Upgrading with Zero Downtime

    One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service.

    When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases) and requests from other services will be routed to both versions. Consequently, Lucidworks ensures that all changes we make to our services do not break the API interfaces exposed to other services within the same 5.x line of releases. We also ensure stored configuration remains compatible within the same 5.x release line.

    Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion.

    To upgrade your cluster at any time, use the --upgrade option with our setup scripts in this repo.

    The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
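    For example, to preview and then apply an upgrade on an existing installation (names are placeholders):

    # show what would change without applying it
    ./setup_f5_aks.sh -c my-fusion-cluster -p my-fusion-rg -n default -r f5 --upgrade --dry-run
    # perform the upgrade
    ./setup_f5_aks.sh -c my-fusion-cluster -p my-fusion-rg -n default -r f5 --upgrade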

    Grafana Dashboards

    Get the initial Grafana password from a K8s secret by doing:

    kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
      -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

    With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do the following (replace ${RELEASE} with your Helm release label):

    kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000

    You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step.
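    For example, to watch for the external IP and extract it once assigned (the service name grafana matches the expose command above):

    kubectl get svc grafana --namespace <namespace> -w
    kubectl get svc grafana --namespace <namespace> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'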

    This logs you into the application. We recommend creating another administrative user with a secure password of your choosing.

    The dashboards and data source are set up for you in Grafana. Navigate to Dashboards > Manage to view the available dashboards.

    Frequently Asked Questions

    Can the stateful database services, for example, MySQL, be supported by an Azure PaaS service?

    This option is not supported. While it may be possible to implement in theory, it has not been tested by Lucidworks.

    Is it possible to use cross-zone storage solutions rather than volumes, such as Microsoft Azure file storage, for stateful services?

    While in theory it may be possible to implement this functionality, the configuration has not been tested by Lucidworks and is not supported.