Fusion 5.12

    Deploy Fusion 5 on Amazon Elastic Kubernetes Service (EKS)

    Fusion supports deployment on Amazon Elastic Kubernetes Service (EKS). This topic explains how to deploy a Fusion cluster on EKS using the setup_f5_eks.sh script in the fusion-cloud-native repository.

    In addition, this topic provides information about how to configure IAM roles for the service account.

    Prerequisites

    This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

    Release Name and Namespace

    Before installing Fusion, you need to choose a Kubernetes namespace to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.

    NOTE: All Fusion services must run in the same namespace, i.e. you should not try to split a Fusion cluster across multiple namespaces.

    Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.
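
    For example, you can capture your chosen namespace and release name in shell variables and reuse them in the commands that follow (fusion-dev is a hypothetical name; adjust to your environment):

    export NAMESPACE=fusion-dev
    export RELEASE=${NAMESPACE}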

    Install Helm

    Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Helm is required to install Fusion on any Kubernetes platform. On macOS, you can do:

    brew install kubernetes-helm

    If you already have helm installed, make sure you’re using the latest version:

    brew upgrade kubernetes-helm

    For other operating systems, please refer to the Helm installation docs: https://helm.sh/docs/using_helm/

    The Fusion Helm chart requires Helm version 3.0.0 or greater; check your Helm version by running helm version --short.
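
    For example, a quick sanity check that the installed Helm client is v3 or later might look like this (a sketch; adapt to your shell):

    helm version --short | grep -q '^v3' || echo "Helm v3 or later is required" >&2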

    Helm User Permissions

    If you require that Fusion is installed by a user with minimal permissions, instead of an admin user, the role and cluster role that must be assigned to that user within the namespace where you wish to install Fusion are documented in the install-roles directory.

    When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
    alias k=kubectl

    To use these roles in a cluster, first create, as an admin user, the namespace that you wish to install Fusion into:

    k create namespace fusion-namespace

    Apply the role.yaml and cluster-role.yaml files to that namespace:

    k apply -f cluster-role.yaml
    k config set-context --current --namespace=$NAMESPACE
    k apply -f role.yaml

    Then create a RoleBinding and ClusterRoleBinding for the install user:

    k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
    k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>

    You will then be able to run the helm install command as the <install_user>.
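
    A minimal sketch of that install, assuming the Lucidworks Helm chart repository at https://charts.lucidworks.com, a chart named lucidworks/fusion, and a custom values file named values.yaml (these names are assumptions; adjust to your environment):

    helm repo add lucidworks https://charts.lucidworks.com
    helm repo update
    helm install fusion-namespace lucidworks/fusion --namespace fusion-namespace --values values.yaml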

    Clone fusion-cloud-native from GitHub

    You should clone this repo from GitHub as you'll need to run the scripts on your local workstation:

    git clone https://github.com/lucidworks/fusion-cloud-native.git

    You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.

    cd fusion-cloud-native
    git pull

    Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, you can download the project: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.

    The setup_f5_eks.sh script provided in this repo is strictly optional. The script is mainly to help those new to Kubernetes and/or Fusion get started quickly. If you're already familiar with K8s, Helm, and EKS, then you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described here.

    If you’re new to Amazon Web Services (AWS), then please visit the Amazon Web Services Getting Started Center to set up an account.

    If you’re new to Kubernetes and EKS, then we recommend going through Amazon’s EKS Workshop before proceeding with Fusion.

    Set up the AWS CLI tools

    Before launching an EKS cluster, you need to install and configure kubectl, aws, eksctl, and aws-iam-authenticator using the links provided below:

    Required AWS Command-line Tools:
    1. kubectl: Install kubectl

    2. aws: Installing the AWS CLI

    3. eksctl: Getting Started with eksctl

    4. aws-iam-authenticator: AWS IAM Authenticator for Kubernetes

    Run aws configure to configure a profile for authenticating to AWS. You’ll use the profile name you configure in this step, which defaults to default, as the -p argument to the setup_f5_eks.sh script in the next section.

    When working in Ubuntu, avoid using the snap version of eksctl. Alternative sources can ship different versions that may cause command failures. Also, always make sure you are using the latest version of each of the required tools.
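
    For example, you can confirm each tool is installed and report its version with:

    kubectl version --client
    aws --version
    eksctl version
    aws-iam-authenticator version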

    Set up Fusion on EKS

    To create a cluster in EKS the following IAM policies are required:

    • AmazonEC2FullAccess

    • AWSCloudFormationFullAccess

    EKS Permissions

    eks:DeleteCluster

    eks:UpdateClusterVersion

    eks:ListUpdates

    eks:DescribeUpdate

    eks:DescribeCluster

    eks:ListClusters

    eks:CreateCluster

    VPC Permissions

    ec2:DeleteSubnet

    ec2:DeleteVpcEndpoints

    ec2:CreateVpc

    ec2:AttachInternetGateway

    ec2:DetachInternetGateway

    ec2:DisassociateSubnetCidrBlock

    ec2:DescribeVpcAttribute

    ec2:AssociateVpcCidrBlock

    ec2:ModifySubnetAttribute

    ec2:DisassociateVpcCidrBlock

    ec2:CreateVpcEndpoint

    ec2:DescribeVpcs

    ec2:CreateInternetGateway

    ec2:AssociateSubnetCidrBlock

    ec2:ModifyVpcAttribute

    ec2:DeleteInternetGateway

    ec2:DeleteVpc

    ec2:CreateSubnet

    ec2:DescribeSubnets

    ec2:ModifyVpcEndpoint

    IAM Permissions

    iam:CreateInstanceProfile

    iam:DeleteInstanceProfile

    iam:GetRole

    iam:GetPolicyVersion

    iam:UntagRole

    iam:GetInstanceProfile

    iam:GetPolicy

    iam:TagRole

    iam:RemoveRoleFromInstanceProfile

    iam:DeletePolicy

    iam:CreateRole

    iam:DeleteRole

    iam:AttachRolePolicy

    iam:PutRolePolicy

    iam:ListInstanceProfiles

    iam:AddRoleToInstanceProfile

    iam:CreatePolicy

    iam:ListInstanceProfilesForRole

    iam:PassRole

    iam:DetachRolePolicy

    iam:DeleteRolePolicy

    iam:CreatePolicyVersion

    iam:GetRolePolicy

    iam:DeletePolicyVersion

    Download and run the setup_f5_eks.sh script to install Fusion 5.x in an EKS cluster.

    This script does not support multiple node pools and should not be used for production clusters.
    • To create a new cluster and install Fusion, run the following command:

      ./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --create demo
      • Replace my-eks-cluster, profile-name, and fusion-namespace with your cluster, profile, and namespace values.

      • Pass the --create option with either demo or multi_az.

    • To use an existing cluster and install Fusion, run the following command:

      ./setup_f5_eks.sh -c cluster-name -p profile-name
      • Replace cluster-name with the name of the cluster you already created.

      • Replace profile-name with the name of your profile.

    The profile is automatically set to default if you ran aws configure without giving the profile a name.

    Use the --help option to see full script usage.

    If using Helm V2, the setup_f5_eks.sh script installs Helm’s tiller component into your EKS cluster with the cluster admin role. If you don’t want this, then please upgrade to Helm v3.
    The setup_f5_eks.sh script creates a service account that provides S3 read-only permissions to the created pods.

    After running the setup_f5_eks.sh script, proceed to the Verifying the Fusion Installation section below.

    EKS cluster overview

    The EKS cluster is created using eksctl (https://eksctl.io/). By default it sets up the following resources in your AWS account:

    • A dedicated VPC for the EKS cluster in the specified region with CIDR: 192.168.0.0/16

    • 3 Public and 3 Private subnets within the created VPC, each with a /19 CIDR range, along with the corresponding route tables.

    • A NAT gateway in each Public subnet

    • An Auto Scaling Group of the instance type specified by the script, which defaults to m5.2xlarge, with 3 instances spanning the public subnets.

    See https://eksctl.io/usage/vpc-networking/ for more information on the networking setup.
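
    For reference, a roughly equivalent standalone eksctl invocation looks like the following sketch (the setup script builds its own command; the region and names here are placeholders):

    eksctl create cluster --name my-eks-cluster --region us-west-2 \
      --node-type m5.2xlarge --nodes 3 --profile profile-name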

    EKS Ingress

    The setup_f5_eks.sh script exposes the Fusion proxy service on an external DNS name provided by an ELB over HTTP. This is done for demo or getting started purposes. However, you’re strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service. See: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/

    Our EKS script creates a classic ELB for exposing the Fusion proxy service. If you need to change this behavior and use the AWS Load Balancer Controller instead, pass the following parameter when running the setup_f5_eks.sh script:

    --deploy-alb     # Tells the script to deploy an ALB

    By default, the kube-system namespace is used for installing the aws-load-balancer-controller because the pods' priorityClassName is set to system-cluster-critical.

    If you need to deploy an internal ALB, use the --internal-alb option. This creates the nodes in the internal subnets. Fusion will be reachable from an AWS instance located in any of the external subnets in the same VPC. Using an ALB also requires an ingress with a DNS name; you can use the -h option to create an ingress with the required DNS name, as shown in the example below.
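
    For example, a sketch of creating a demo cluster fronted by an internal ALB with an ingress for a hostname you own (fusion.example.com is a placeholder):

    ./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --create demo \
      --deploy-alb --internal-alb -h fusion.example.com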

    Finally, use Route 53 or your DNS provider to create an A ALIAS DNS record for your DNS name pointing to the ingress ADDRESS. You can get the address by listing the ingress with the command kubectl get ing.

    Provide access to the EKS cluster to other users

    Initially, only the user that created the Amazon EKS cluster has system:masters permissions to configure the cluster. To extend these permissions, create a ConfigMap that grants access to additional IAM users or roles.

    To provide these permissions, use the following yaml file as a template, replacing the required values:

    aws-auth.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <node_instance_role_arn>
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
      mapUsers: |
        - userarn: arn:aws:iam::<account_id>:user/<username>
          username: <username>
          groups:
            - system:masters

    Use the following command to apply the yaml file: kubectl apply -f aws-auth.yaml
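
    To confirm the mapping was applied, you can inspect the ConfigMap, for example:

    kubectl describe configmap aws-auth -n kube-system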

    Remove EKS cluster

    If you deployed an ALB ingress controller, you need to remove the policy that was created for managing the ALB before removing the cluster. You can use the following command:

    aws iam --profile <profile-name> delete-policy --policy-arn arn:aws:iam::<account_id>:policy/eksctl-<cluster-name>-alb-policy

    You can also remove it manually using the AWS IAM console by searching for eksctl-<cluster-name>-alb-policy.

    After that, remove the ALB with helm delete; use helm list to find the release name.
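
    For example, assuming the controller was installed as a Helm release named aws-load-balancer-controller in the kube-system namespace (the release name may differ in your installation):

    helm list --namespace kube-system
    helm delete aws-load-balancer-controller --namespace kube-system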

    The EKS cluster is created using CloudFormation stacks, so you need to remove them to delete the cluster. You can check them in the AWS CloudFormation console; look for the following stacks:

    • eksctl-<cluster-name>-nodegroup-standard-workers

    • eksctl-<cluster-name>-cluster

    Remove the eksctl-<cluster-name>-nodegroup-standard-workers stack first, then remove the eksctl-<cluster-name>-cluster stack.

    You can also use the following commands:

    aws cloudformation --profile <profile-name> delete-stack --stack-name eksctl-<cluster-name>-nodegroup-standard-workers
    aws cloudformation --profile <profile-name> delete-stack --stack-name eksctl-<cluster-name>-cluster

    Verifying the Fusion Installation

    In this section, we provide some tips on how to verify the Fusion installation.

    Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.

    Let’s review some useful kubectl commands.

    Enhance the K8s Command-line Experience

    Here is a list of tools we found useful for improving your command-line experience with Kubernetes:

    Useful kubectl commands

    Set the namespace for kubectl if not using the default:

    kubectl config set-context --current --namespace=<NAMESPACE>

    This saves you from having to pass -n with every command.

    Get a list of running pods: k get pods

    Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline

    Get pod deployment spec and details: k get pods <pod_id> -o yaml

    Get details about a pod's events: k describe po <pod_id>

    Port forward to a specific pod: k port-forward <pod_id> 8983:8983

    SSH into a pod: k exec -it <pod_id> -- /bin/bash

    CPU/Memory usage report for pods: k top pods

    Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0

    Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N

    Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

    Check Fusion Pods and Services

    Once the install script completes, you can check that all pods and services are available using:

    kubectl get pods

    If all goes well, you should see a list of pods similar to:

    NAME                                                        READY   STATUS    RESTARTS   AGE
    seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
    f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
    f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
    f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
    f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
    f5-classic-rest-service-0                                   1/1     Running   3          8m44s
    f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
    f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
    f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
    f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
    f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
    f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
    f5-logstash-0                                               1/1     Running   0          8m45s
    f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
    f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
    f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
    f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
    f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
    f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
    f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
    f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
    f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
    f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
    f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
    f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
    f5-solr-0                                                   1/1     Running   0          8m44s
    f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
    f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
    f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
    f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
    f5-zookeeper-0                                              1/1     Running   0          8m45s

    The number of pods per deployment / statefulset will vary based on your cluster size and replicaCount settings in your custom values YAML file. Also, don’t worry if you see some pods having been restarted as that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use: kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
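
    As a convenience, you can also ask Kubernetes to wait until every pod in the namespace reports Ready (adjust the namespace and timeout to your environment):

    kubectl wait --for=condition=Ready pods --all --namespace fusion-namespace --timeout=600s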

    To see a list of Fusion services, do:

    kubectl get svc

    For an overview of the various Fusion 5 microservices, see: Fusion microservices.

    Once you're ready to build a Fusion cluster for production, please see the Fusion 5 Survival Guide for more information.

    Upgrading with Zero Downtime

    One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service.

    When Kubernetes performs a rolling update to an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases) and requests from other services will be routed to both versions. Consequently, Lucidworks ensures all changes we make to our service do not break the API interface exposed to other services in the same 5.x line of releases. We also ensure stored configuration remains compatible in the same 5.x release line.

    Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion.

    To upgrade your cluster at any time, use the --upgrade option with our setup scripts in this repo.

    The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
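
    For example, using the placeholder cluster, profile, and namespace values from earlier, the first command previews the changes and the second applies them:

    ./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --upgrade --dry-run
    ./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --upgrade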

    Grafana Dashboards

    Get the initial Grafana password from a K8s secret by doing:

    kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
      -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

    With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do the following (replace ${RELEASE} with your Helm release label):

    kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000

    You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step.

    This will log you into the application. It is recommended that you create another administrative user with a more desirable password.
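
    If you prefer the temporary port-forward approach instead of a LoadBalancer, a minimal sketch looks like this (again replacing ${RELEASE} and ${NAMESPACE}); then browse to http://localhost:3000:

    kubectl port-forward --namespace "${NAMESPACE}" deployment/${RELEASE}-monitoring-grafana 3000:3000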

    The dashboards and data source are set up for you in Grafana. Simply navigate to Dashboards > Manage to view the available dashboards.

    Configure IAM roles for the service account

    Configuring IAM roles lets you utilize the Amazon Web Services Security Token Service (AWS STS) for short-term authentication credentials to access services like the Amazon S3 simple storage service.

    To configure IAM roles, your user account must be granted admin permissions or IAM:FullAccess. Complete the following steps:

    1. To create the OpenID Connect (OIDC) provider, run the following command:

      eksctl utils associate-iam-oidc-provider --cluster cluster_name --approve
    2. To create an IAM role for the service account associated with the plugin-pod, run the following command:

      eksctl create iamserviceaccount --name f5-connector-plugin --namespace default \
        --cluster cluster_name --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
        --approve --override-existing-serviceaccounts

      This command:

      • Creates an IAM role and attaches the target policy.

      • Updates the existing Kubernetes f5-connector-plugin service account and annotates it with the IAM role.

      • Uses the existing AmazonS3ReadOnlyAccess managed policy.

    If the IAM role was already created without the command, and you want to associate the service account, run the following command:

    kubectl annotate serviceaccount -n default f5-connector-plugin eks.amazonaws.com/role-arn=arn:aws:iam::411271863668:role/FUS_ROLE --overwrite=true
    To utilize this feature in Fusion 5.3 and later, create a data source with the settings in S3 Authentication Settings > AWS Instance Credentials Authentication Settings. For detailed installation information, see the AWS S3 V2 connector documentation.
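
    To verify the role annotation on the service account, you can run:

    kubectl describe serviceaccount f5-connector-plugin -n default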

    Additional resources

    Lucidworks offers free training to help you get started with Fusion. Check out the Deploying Fusion 5 course, which focuses on the prerequisite software needed to deploy Fusion, the necessary setup steps, and the physical act of deployment:

    Deploying Fusion 5

    Visit the LucidAcademy to see the full training catalog.