How to Deploy Fusion on Google Kubernetes Engine (GKE)

Fusion supports deployment on Google Kubernetes Engine (GKE). This topic explains how to deploy a Fusion cluster on GKE using the setup_f5_gke.sh script in the fusion-cloud-native repository.

The setup_f5_gke.sh script provided in this repo is strictly optional; it mainly helps those who are new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and GKE, you can skip the script and use Helm directly to install Fusion into an existing cluster, or into one you create yourself using the process described here.

If you’re new to Google Cloud Platform (GCP), you need a GCP account before you can begin deploying Fusion on GKE.

Set up the Google Cloud SDK (one time only)

If you’ve already installed the gcloud command-line tools, you can skip to Create a Fusion cluster in GKE.

These steps set up your local Google Cloud SDK environment so that you’re ready to use the command-line tools to manage your Fusion deployment.

Usually, you only need to perform these setup steps once. After that, you’re ready to create a cluster.

How to set up the Google Cloud SDK
  1. Enable the Kubernetes Engine API.

  2. Log in to Google Cloud: gcloud auth login

  3. Set up the Google Cloud SDK:

    1. gcloud config set compute/zone <zone-name>

      If you are working with regional clusters instead of zonal clusters, use gcloud config set compute/region <region-name> instead.

    2. gcloud config set core/account <email address>

    3. New GCP projects only: gcloud projects create <new-project-name>

      If you have already created a project, for example in the Google Cloud Platform console, then skip to the next step.

    4. gcloud config set project <project-name>
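
For example, a complete setup session might look like this (the zone, account, and project values below are placeholders; substitute your own):

gcloud auth login
gcloud config set compute/zone us-west1-a        # an example zone; pick one near you
gcloud config set core/account jane@example.com  # your Google account email
gcloud config set project my-fusion-project      # an existing GCP project ID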

Make sure you install the Kubernetes command-line tool kubectl using:

gcloud components install kubectl
gcloud components update

Set up Fusion on GKE

Download and run the setup_f5_gke.sh script to install Fusion 5.x in a GKE cluster. To create a new cluster and install Fusion, simply do:

./setup_f5_gke.sh -c <cluster_name> -p <gcp_project_id> -r <release> -n <namespace>

Use the --help option to see script usage. If you want the script to create a cluster for you (the default behavior), pass the --create option with either demo or multi_az. If you don’t want the script to create a cluster, create one before running the script and pass its name using the -c parameter.
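
For example (the cluster and project names below are placeholders):

# Have the script create a new single-node demo cluster and install Fusion into it
./setup_f5_gke.sh -c fusion-demo -p my-gcp-project -r f5 -n default --create demo

# Install Fusion into a cluster you created yourself beforehand
./setup_f5_gke.sh -c my-existing-cluster -p my-gcp-project -r f5 -n default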

If you pass --create demo to the script, it creates a single-node GKE cluster. The minimum node type you’ll need for a one-node cluster is n1-standard-4, which has 4 CPUs and 15 GB of memory. This is cutting it very close in terms of resources, as the same node must also host all of the Kubernetes system pods. It works for kicking the tires on Fusion 5.0, but it is not sufficient for production workloads.

You can change the instance type using the -i parameter; see https://cloud.google.com/compute/docs/regions-zones/#available for a list of the machine types available in your desired region.

Note: If you don’t provide a custom values file, the script generates one named gke_<cluster>_<release>_fusion_values.yaml, which you can use to customize the Fusion chart.

WARNING The setup_f5_gke.sh script installs Helm’s tiller component into your GKE cluster with the cluster admin role. If you don’t want this, then please see Helm w/o Tiller below.

If you see an error similar to the following, then wait a few seconds and try running the setup_f5_gke.sh script again with the same arguments as this is usually a transient issue:

Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

After running the setup_f5_gke.sh script, proceed to the Verifying the Fusion Installation section below.

The steps below show you how to create several kinds of Fusion clusters.

How to create a single-node Fusion demo cluster

A single-node configuration is useful for exploring Fusion in a demo or development environment.

This type of deployment takes at least 12 minutes, plus another 3–5 minutes for cluster startup.

How to create a single-node Fusion demo cluster
  1. Run the setup script:

    ./setup_f5_gke.sh -c <cluster> -p <project> -z <zone-name> --create demo
    • <cluster> value should be the name of a cluster that does not exist yet; the script creates the new cluster for you.

    • <project> must match the name of an existing project in GKE.

      Run gcloud config get-value project to get this value, or see the GKE setup instructions.

    • <zone-name> must match the name of the zone you set in GKE. For a demo cluster, the zone must be a specific availability zone, not a region; for example, us-west1-a instead of us-west1.

      Run gcloud config get-value compute/zone to get this value, or see the GKE setup instructions to set the value.

    Upon success, the script shows you where to find the Fusion UI. For example:

    Fusion 5 Gateway service exposed at: <some-external-ip>:6764
  2. Access the Fusion UI by pointing your browser to the IP address and port specified in the setup script’s output.
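
If you need to look up the gateway address again later, list the services in your namespace (the gateway is typically exposed as a LoadBalancer service; look for its external IP):

kubectl get services -n default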

Create a three-node regional cluster to withstand a zone outage

With a three-node regional cluster, nodes are deployed across three separate availability zones.

./setup_f5_gke.sh -c <cluster> -p <project> -z <zone-name> --create multi_az

In this configuration, we want a ZooKeeper and Solr instance on each node, which allows the cluster to retain ZK quorum and remain operational after losing one node, such as during an outage in one availability zone.

When running in a multi-zone cluster, each Solr node has the solr_zone system property set to the zone it is running in, such as -Dsolr_zone=us-west1-a.
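
To confirm that the nodes really do span three zones, list them with their zone labels. The label below is the one GKE applied at the time of writing; newer Kubernetes versions use topology.kubernetes.io/zone instead:

kubectl get nodes -L failure-domain.beta.kubernetes.io/zone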

GKE Ingress and TLS

The Fusion proxy service provides authentication and serves as an API gateway for accessing all other Fusion services. It’s typical to use an Ingress for TLS termination in front of the proxy service.

The setup_f5_gke.sh script supports creating an Ingress with a TLS certificate for a domain you own by passing: -t -h <hostname>

After the script runs, you need to create an A record in GCP’s DNS service to map your domain name to the Ingress IP. Once the record is in place, the setup uses Let’s Encrypt to issue a TLS certificate for your Ingress.
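
As a sketch, assuming your domain is managed in a Cloud DNS zone named my-zone (a placeholder), you could look up the Ingress IP and create the A record like this:

# Look up the external IP assigned to the Ingress
kubectl get ingress -n <namespace>

# Map your hostname to that IP (note the trailing dot on the DNS name)
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add <ingress-ip> --name=<hostname>. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone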

To see the status of the Let’s Encrypt issued certificate, do:

kubectl get managedcertificates -n <namespace> -o yaml

Please refer to the Kubernetes documentation on configuring an Ingress for GKE: Setting up HTTP Load Balancing with Ingress

Upgrades and Ingress

IMPORTANT If you used the -t -h <hostname> options when installing your cluster, our script created an additional values yaml file named tls-values.yaml.

To make upgrades easier, you should add the settings from this file into your main custom values YAML file, e.g.:

api-gateway:
  service:
    type: "NodePort"
  ingress:
    enabled: true
    host: "<hostname>"
    tls:
      enabled: true
    annotations:
      "networking.gke.io/managed-certificates": "<RELEASE>-managed-certificate"
      "kubernetes.io/ingress.class": "gce"

This way you don’t have to remember to pass the additional tls-values.yaml file when upgrading.
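
After merging, you can double-check which values are applied to your release by asking Helm (Helm 2 syntax, matching the Tiller-based install performed by the script):

helm get values <release>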

Upgrade Fusion on GKE

During installation, the script generates a file named gke_<cluster>_<release>_fusion_values.yaml; use this file to customize Fusion settings. After making changes to this file, you need to run the following command:

./setup_f5_gke.sh -c <existing_cluster> -p <gcp_project_id> -r <release> -n <namespace> \
  --values gke_<cluster>_<release>_fusion_values.yaml --upgrade

You will also use the --upgrade option to upgrade to a newer version of Fusion, such as 5.0.2.
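
For example, to scale the query pipeline service to two replicas, you might add the following to your custom values file and then re-run the upgrade command shown above (this assumes the query-pipeline subchart exposes a replicaCount setting; check the chart documentation for the exact key):

query-pipeline:
  replicaCount: 2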

If you’re using the default namespace and see an error similar to the following, then simply pass the --force parameter when upgrading:

Namespace default is owned by: , by we are: OWNER please provide the --force parameter if you are sure you wish to upgrade this namespace

This owner label check before upgrading is in place as a safeguard for shared clusters with Fusion deployed to multiple namespaces.

After running the upgrade, use kubectl get pods to see the changes being applied to your cluster. It may take several minutes to perform the upgrade as new Docker images need to be pulled from DockerHub. To see the versions of running pods, do:

kubectl get po -o jsonpath='{..image}'  | tr -s '[[:space:]]' '\n' | sort | uniq

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation. First, let’s review some useful kubectl commands.

Useful kubectl commands

When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:

alias k=kubectl

Set the namespace for kubectl if not using the default:

kubectl config set-context --current --namespace=<NAMESPACE>

This saves you from having to pass -n with every command.

Get a list of running pods: k get pods

Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline

Get pod deployment spec and details: k get pods <pod_id> -o yaml

Get details about a pod’s events: k describe po <pod_id>

Port forward to a specific pod: k port-forward <pod_id> 8983:8983

SSH into a pod: k exec -it <pod_id> -- /bin/bash

CPU/Memory usage report for pods: k top pods

Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0

Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N

Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:

kubectl get pods

If all goes well, you should see a list of pods similar to:

NAME                                   READY   STATUS    RESTARTS   AGE
f5-admin-ui-669bb68f74-pjqtw           1/1     Running   0          19h
f5-api-gateway-6f7fdd69d-bt2nc         1/1     Running   0          19h
f5-auth-ui-b4dfd4f6d-f9tb6             1/1     Running   0          19h
f5-classic-rest-service-0              1/1     Running   1          19h
f5-devops-ui-768cf6f55b-wphsw          1/1     Running   0          19h
f5-fusion-admin-5888f54447-hprt6       1/1     Running   0          19h
f5-fusion-indexing-76dfb65dfd-929f4    1/1     Running   0          19h
f5-insights-686464b75b-6pzw5           1/1     Running   0          19h
f5-job-launcher-5d84c859c4-dl7s9       1/1     Running   0          19h
f5-job-rest-server-fb99fcfd7-lmqvd     1/1     Running   0          19h
f5-logstash-0                          1/1     Running   0          19h
f5-ml-model-service-8574b96c68-jqt88   2/2     Running   0          17h
f5-query-pipeline-77956f56f8-22wg7     1/1     Running   0          19h
f5-rest-service-77ff7d45-rbrn4         1/1     Running   0          19h
f5-rpc-service-67b6f4bf49-2d65g        1/1     Running   1          19h
f5-rules-ui-65d59dc5b4-5ntq9           1/1     Running   0          19h
f5-solr-0                              1/1     Running   0          19h
f5-webapps-7d9497c485-bbtg9            1/1     Running   0          19h
f5-zookeeper-0                         1/1     Running   0          19h

The number of pods per deployment or statefulset will vary based on your cluster size and the replicaCount settings in your custom values YAML file. Don’t worry if you see that some pods have been restarted; that just means they were too slow to come up, so Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is still not running after a sufficient wait, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous incarnations of a pod, use kubectl logs <pod_id> -p. You can also review the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.

To see a list of Fusion services, do:

kubectl get svc
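
If your cluster doesn’t expose an external IP, you can also reach the gateway by port-forwarding to its service. The service name used here (proxy) is an assumption; check the get svc output for the actual name:

kubectl port-forward svc/proxy 6764:6764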

For an overview of the various Fusion 5 microservices, see: https://doc.lucidworks.com/fusion-server/5.0/deployment/kubernetes/microservices.html