How to Deploy Fusion on Amazon Elastic Kubernetes Service (EKS)
- Set up the AWS CLI tools
- Set up Fusion on EKS
- Upgrade Fusion on EKS
- Provide access to the EKS cluster to other users
- Verifying the Fusion Installation
Fusion supports deployment on Amazon Elastic Kubernetes Service (EKS). This topic explains how to deploy a Fusion cluster on EKS using the setup_f5_eks.sh script in the fusion-cloud-native repository.
The setup_f5_eks.sh script provided in this repo is strictly optional. It is mainly intended to help those new to Kubernetes and/or Fusion get started quickly. If you’re already familiar with K8s, Helm, and EKS, then you can use Helm directly to install Fusion into an existing cluster, or into one you create yourself using the process described here.
If you’re new to Amazon Web Services (AWS), then please visit the Amazon Web Services Getting Started Center to set up an account.
If you’re new to Kubernetes and EKS, then we recommend going through Amazon’s EKS Workshop before proceeding with Fusion.
Set up the AWS CLI tools
Before launching an EKS cluster, you need to install and configure kubectl, aws, eksctl, and aws-iam-authenticator using the links provided below:
- kubectl: Install kubectl
- eksctl: Getting Started with eksctl
- aws-iam-authenticator: AWS IAM Authenticator for Kubernetes
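After installing, you can quickly confirm that each tool is on your PATH and check its version, for example:
# Verify the required command-line tools are available
kubectl version --client
aws --version
eksctl version
aws-iam-authenticator version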
Run aws configure to configure a profile for authenticating to AWS. You’ll use the profile name you configure in this step, which defaults to default, as the -p argument to the setup_f5_eks.sh script in the next section.
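For example, to create and then sanity-check a profile named default (the profile name here is just an illustration):
# Configure credentials and a default region for the profile
aws configure --profile default
# Confirm the credentials work before running the setup script
aws sts get-caller-identity --profile default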
Note: When working in Ubuntu, avoid using the eksctl snap version. Alternative sources can have different versions that could cause command failures.
Set up Fusion on EKS
To create a cluster in EKS the following IAM policies are required (a sketch for attaching them via the AWS CLI follows this list):
- AmazonEC2FullAccess
- AWSCloudFormationFullAccess
- EKS Permissions:
  - eks:DeleteCluster
  - eks:UpdateClusterVersion
  - eks:ListUpdates
  - eks:DescribeUpdate
  - eks:DescribeCluster
  - eks:ListClusters
  - eks:CreateCluster
- VPC Permissions:
  - ec2:DeleteSubnet
  - ec2:DeleteVpcEndpoints
  - ec2:CreateVpc
  - ec2:AttachInternetGateway
  - ec2:DetachInternetGateway
  - ec2:DisassociateSubnetCidrBlock
  - ec2:DescribeVpcAttribute
  - ec2:AssociateVpcCidrBlock
  - ec2:ModifySubnetAttribute
  - ec2:DisassociateVpcCidrBlock
  - ec2:CreateVpcEndpoint
  - ec2:DescribeVpcs
  - ec2:CreateInternetGateway
  - ec2:AssociateSubnetCidrBlock
  - ec2:ModifyVpcAttribute
  - ec2:DeleteInternetGateway
  - ec2:DeleteVpc
  - ec2:CreateSubnet
  - ec2:DescribeSubnets
  - ec2:ModifyVpcEndpoint
- IAM Permissions:
  - iam:CreateInstanceProfile
  - iam:DeleteInstanceProfile
  - iam:GetRole
  - iam:GetPolicyVersion
  - iam:UntagRole
  - iam:GetInstanceProfile
  - iam:GetPolicy
  - iam:TagRole
  - iam:RemoveRoleFromInstanceProfile
  - iam:DeletePolicy
  - iam:CreateRole
  - iam:DeleteRole
  - iam:AttachRolePolicy
  - iam:PutRolePolicy
  - iam:ListInstanceProfiles
  - iam:AddRoleToInstanceProfile
  - iam:CreatePolicy
  - iam:ListInstanceProfilesForRole
  - iam:PassRole
  - iam:DetachRolePolicy
  - iam:DeleteRolePolicy
  - iam:CreatePolicyVersion
  - iam:GetRolePolicy
  - iam:DeletePolicyVersion
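The first two entries are AWS managed policies; the EKS, VPC, and IAM actions are typically bundled into a customer-managed policy. As a minimal sketch (assuming a hypothetical IAM user named fusion-installer and an AWS CLI profile with sufficient privileges), the managed policies could be attached like this:
# Attach the AWS managed policies to the installing user ("fusion-installer" is a placeholder)
aws iam attach-user-policy --user-name fusion-installer \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name fusion-installer \
  --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
# The eks:*, ec2:*, and iam:* actions listed above would go into a customer-managed
# policy (created with `aws iam create-policy`) attached the same way.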
Download and run the setup_f5_eks.sh script to install Fusion 5.x in an EKS cluster. To create a new cluster and install Fusion, simply do:
./setup_f5_eks.sh -c <cluster_name> -p <aws_account_profile>
If you want the script to create a cluster for you (the default behavior), then you need to pass the --create option with either demo or multi_az.
If you don’t want the script to create a cluster, then you need to create a cluster before running the script and simply pass the name of the existing cluster using the -c parameter.
Use the --help option to see full script usage.
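For example, assuming a cluster named my-eks-cluster and the default AWS CLI profile (both placeholders), the two modes of running the script look like this:
# Have the script create a single-AZ demo cluster and install Fusion into it
./setup_f5_eks.sh -c my-eks-cluster -p default --create demo
# Install Fusion into a cluster you created yourself beforehand
./setup_f5_eks.sh -c my-existing-cluster -p default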
WARNING: The setup_f5_eks.sh script installs Helm’s tiller component into your EKS cluster with the cluster admin role. If you don’t want this, then please see Helm w/o Tiller below.
WARNING: The setup_f5_eks.sh script creates a service account that provides S3 read-only permissions to the created pods.
After running the setup_f5_eks.sh script, proceed to the Verifying the Fusion Installation section below.
EKS cluster overview
The EKS cluster is created using eksctl (https://eksctl.io/). By default it will set up the following resources in your AWS account:
- A dedicated VPC for the EKS cluster in the specified region with CIDR: 192.168.0.0/16
- 3 Public and 3 Private subnets within the created VPC, each with a /19 CIDR range, along with the corresponding route tables.
- A NAT gateway in each Public subnet
- An Auto Scaling Group of the instance type specified by the script, which defaults to m5.2xlarge, with 3 instances spanning the public subnets.
See https://eksctl.io/usage/vpc-networking/ for more information on the networking setup.
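Once the script finishes, you can confirm the cluster and its node group with eksctl; the cluster name and profile below are placeholders:
# List EKS clusters visible to your AWS profile
eksctl get cluster --profile default
# Show the node group backing the cluster
eksctl get nodegroup --cluster my-eks-cluster --profile default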
EKS Ingress
The setup_f5_eks.sh script exposes the Fusion proxy service on an external IP over HTTP. This is done for demo or getting-started purposes. However, you’re strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service.
See: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
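To find the external address the script exposed, list the services in your namespace and look for the one with an external IP or ELB hostname; the service name proxy below is an assumption and may differ in your release:
# Show services and their external endpoints
kubectl get svc -n <namespace>
kubectl get svc proxy -n <namespace> -o wide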
Upgrade Fusion on EKS
During installation, the script generates a file named eks_<cluster>_<release>_fusion_values.yaml. Use this file to customize Fusion settings. After making changes to this file, run the following command:
./setup_f5_eks.sh -c <existing_cluster> -p <aws_account_profile> -r <release> -n <namespace> \
  --values eks_<cluster>_<release>_fusion_values.yaml --upgrade
You will also use the --upgrade option to upgrade to a newer version of Fusion, such as 5.0.2.
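For example, with a cluster named my-eks-cluster, release f5, and namespace default (all placeholders), an upgrade run might look like:
./setup_f5_eks.sh -c my-eks-cluster -p default -r f5 -n default \
  --values eks_my-eks-cluster_f5_fusion_values.yaml --upgrade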
Provide access to the EKS cluster to other users
Initially, only the user that created the Amazon EKS cluster has system:masters permissions to configure the cluster. To extend those permissions, create a ConfigMap that grants access to additional IAM users or roles.
To grant these permissions, use the following YAML file as a template, replacing the required values:
aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <node_instance_role_arn>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account_id>:user/<username>
      username: <username>
      groups:
        - system:masters
Use the following command to apply the YAML file: kubectl apply -f aws-auth.yaml
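After applying the ConfigMap, you can confirm the mapping and have the newly added user verify their access, for example:
# Inspect the aws-auth ConfigMap
kubectl get configmap aws-auth -n kube-system -o yaml
# As the newly added user, confirm cluster access
kubectl get nodes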
Verifying the Fusion Installation
In this section, we provide some tips on how to verify the Fusion installation. First, let’s review some useful kubectl commands.
Useful kubectl commands
When working with Kubernetes on the command line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl
Set the namespace for kubectl if not using the default:
kubectl config set-context --current --namespace=<NAMESPACE>
This saves you from having to pass -n with every command.
- Get a list of running pods: k get pods
- Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
- Get pod deployment spec and details: k get pods <pod_id> -o yaml
- Get details about a pod’s events: k describe po <pod_id>
- Port forward to a specific pod: k port-forward <pod_id> 8983:8983
- SSH into a pod: k exec -it <pod_id> -- /bin/bash
- CPU/Memory usage report for pods: k top pods
- Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
- Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
Check Fusion Pods and Services
Once the install script completes, you can check that all pods and services are available using:
kubectl get pods
If all goes well, you should see a list of pods similar to:
NAME READY STATUS RESTARTS AGE
f5-admin-ui-669bb68f74-pjqtw 1/1 Running 0 19h
f5-api-gateway-6f7fdd69d-bt2nc 1/1 Running 0 19h
f5-auth-ui-b4dfd4f6d-f9tb6 1/1 Running 0 19h
f5-classic-rest-service-0 1/1 Running 1 19h
f5-devops-ui-768cf6f55b-wphsw 1/1 Running 0 19h
f5-fusion-admin-5888f54447-hprt6 1/1 Running 0 19h
f5-fusion-indexing-76dfb65dfd-929f4 1/1 Running 0 19h
f5-insights-686464b75b-6pzw5 1/1 Running 0 19h
f5-job-launcher-5d84c859c4-dl7s9 1/1 Running 0 19h
f5-job-rest-server-fb99fcfd7-lmqvd 1/1 Running 0 19h
f5-logstash-0 1/1 Running 0 19h
f5-ml-model-service-8574b96c68-jqt88 2/2 Running 0 17h
f5-query-pipeline-77956f56f8-22wg7 1/1 Running 0 19h
f5-rest-service-77ff7d45-rbrn4 1/1 Running 0 19h
f5-rpc-service-67b6f4bf49-2d65g 1/1 Running 1 19h
f5-rules-ui-65d59dc5b4-5ntq9 1/1 Running 0 19h
f5-solr-0 1/1 Running 0 19h
f5-webapps-7d9497c485-bbtg9 1/1 Running 0 19h
f5-zookeeper-0 1/1 Running 0 19h
The number of pods per deployment / statefulset will vary based on your cluster size and the replicaCount settings in your custom values YAML file.
Also, don’t worry if you see that some pods have been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them.
You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for a previous instance of a pod, use kubectl logs <pod_id> -p.
You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.
To see a list of Fusion services, do:
kubectl get svc
For an overview of the various Fusion 5 microservices, see: https://doc.lucidworks.com/fusion-server/5.0/deployment/kubernetes/microservices.html
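If you have not set up an Ingress or external address yet, you can reach the Fusion UI through a port-forward to the proxy service; the service name proxy and port 6764 are assumptions based on a default Fusion 5 install and may differ in your deployment:
# Forward local port 6764 to the Fusion proxy, then browse to http://localhost:6764
kubectl port-forward svc/proxy 6764:6764 -n <namespace>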