Deploy Fusion 5 on Google Kubernetes Engine (GKE)
This topic explains how to deploy a Fusion cluster on GKE using the setup_f5_gke.sh script in the fusion-cloud-native repository.

Install helm, as it is required to install Fusion for any K8s platform. On MacOS, you can install it with Homebrew. Fusion requires Helm version 3.0.0 or later; check your Helm version by running helm version --short.
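For example, a typical Helm install on MacOS might look like the following (this assumes Homebrew is available; any Helm v3 installation method works):

[source,bash]
----
# Install Helm v3 via Homebrew, then confirm the version is >= 3.0.0
brew install helm
helm version --short
----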
If the cluster is set up by a cluster administrator and Fusion is installed by a user with fewer permissions, use the role definitions provided in the install-roles directory. A cluster admin applies these roles with kubectl, e.g. by applying the role.yaml and cluster-role.yaml files to the namespace Fusion will be installed into. The install user can then run the helm install command as the <install_user>. Run these commands from the fusion-cloud-native-master directory.
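A minimal sketch of how a cluster admin might apply these roles (the fusion namespace used here is an illustrative assumption; role.yaml and cluster-role.yaml come from the install-roles directory):

[source,bash]
----
# Run from the fusion-cloud-native-master directory as a cluster admin
kubectl create namespace fusion
kubectl apply -f install-roles/role.yaml -n fusion
kubectl apply -f install-roles/cluster-role.yaml
----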
The setup_f5_gke.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_gke.sh) provided in this repo is strictly optional. The script is mainly to help those new to Kubernetes and/or Fusion get started quickly. If you're already familiar with K8s, Helm, and GKE, then you can skip the script and just use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.
Set up the Google Cloud SDK. If you have already installed and configured the gcloud command-line tools, you can skip to <<cluster-create,Create a Fusion cluster in GKE>>.

These steps set up your local Google Cloud SDK environment so that you're ready to use the command-line tools to manage your Fusion deployment. Usually, you only need to perform these setup steps once. After that, you're ready to link:#cluster-create[create a cluster]. For a nice getting started tutorial for GKE, see: https://cloud.google.com/kubernetes-engine/docs/deploy-app-cluster

How to set up the Google Cloud SDK:

1. gcloud auth login
2. gcloud config set compute/zone <zone-name> (if you are working with regional clusters instead of zone clusters, use gcloud config set compute/region <region-name> instead)
3. gcloud config set core/account <email address>
4. gcloud projects create <new-project-name> (if you have already created a project, for example in https://console.cloud.google.com/, then skip to the next step)
5. gcloud config set project <project-name>
You will also need kubectl; if you set up the Google Cloud SDK as described above, you can install it using: gcloud components install kubectl

Download and run the setup_f5_gke.sh script to install Fusion 5.x in a GKE cluster. To create a new, single-node demo cluster and install Fusion, simply do:
script to install Fusion 5.x in a GKE cluster. To create a new, single-node demo cluster and install Fusion, simply do:--help
option to see script usage. If you want the script to create a cluster for you, then you need to pass the --create
option with either demo
or multi_az
. If you don’t want the script to create a cluster, then you need to create a cluster before running the script and simply pass the name of the existing cluster using the -c
parameter.If you pass --create demo
to the script, then we create a single node GKE cluster (defaults to using n1-standard-8
node type). The minimum node type you’ll need for a 1 node cluster is an n1-standard-8
(on GKE) which has 8 CPU and 30 GB of memory. This is cutting it very close in terms of resources as you also need to host all of the Kubernetes system pods on this same node. Obviously, this works for kicking the tires on Fusion 5.1 but is not sufficient for production workloads.You can change the instance type using the -i
parameter; see: https://cloud.google.com/compute/docs/regions-zones/#available for an list of which machine types are available in your desired region.gke_<cluster>_<namespace>_fusion_values.yaml
which you can use to customize the Fusion chart.

If you're using Helm v2, the setup_f5_gke.sh script installs Helm's tiller component into your GKE cluster with the cluster admin role. If you don't want this, then please upgrade to Helm v3.

If you see an error similar to the following, then wait a few seconds and try running the setup_f5_gke.sh script again with the same arguments, as this is usually a transient issue.

After running the setup_f5_gke.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below. When you're ready to deploy Fusion to a production-like environment, see more information in the Fusion 5 Survival Guide.
Script parameters:

* <cluster>: the name of a non-existent cluster; the script will create the new cluster.
* <project>: must match the name of an existing project in GKE. Run gcloud config get-value project to get this value, or see the link:#sdk-setup[GKE setup instructions].
* <namespace>: the Kubernetes namespace to install Fusion into; defaults to default with release f5.
* <region-name>: the name of a GKE region; defaults to us-west1. Run gcloud config get-value compute/zone to get this value, or see the link:#sdk-setup[GKE setup instructions] to set the value.

When running in a multi-zone cluster, each Solr pod has the solr_zone system property set to the zone it is running in, such as -Dsolr_zone=us-west1-a.
After running the setup_f5_gke.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below. When you're ready to deploy Fusion to a production-like environment, see more information in the Fusion 5 Survival Guide.

The setup_f5_gke.sh script supports creating an Ingress with a TLS cert for a domain you own by passing: -t -h <hostname>

After the script runs, you need to create an A record in GCP's DNS service to map your domain name to the Ingress IP. Once this occurs, our script setup uses https://letsencrypt.org/ to issue a TLS cert for your Ingress. To see the status of the Let's Encrypt issued certificate, do:
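One likely way to check, assuming cert-manager is what provisions the Let's Encrypt certificate in your namespace (adjust the resource type if your setup differs):

[source,bash]
----
# List cert-manager Certificate resources and check their Ready status
kubectl get certificates -n <namespace>
kubectl describe certificate -n <namespace>
----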
The example values file sets the externalTrafficPolicy of the proxy service to Local. This preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading. However, when running in a cluster with a dedicated pool for Spark jobs that can scale up and down freely, it can prevent unwanted request failures. This behaviour can be altered with the api-gateway.service.externalTrafficPolicy value, which is set to Local if the example values file is used. You must use externalTrafficPolicy=Local for the Trusted HTTP Realm to work correctly. If you are already using a custom values.yaml file, create an entry for externalTrafficPolicy under the api-gateway service.
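For example, such an entry in a custom values file would look like this (a minimal sketch; your api-gateway section may contain other settings):

[source,yaml]
----
api-gateway:
  service:
    externalTrafficPolicy: Local
----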
If you use the nginx ingress controller to fulfil your ingress definitions, there are a couple of options that are recommended to be set in its configmap:
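The original recommendations are not reproduced here; as an illustration only, options commonly adjusted for Fusion-sized requests include the body-size and timeout keys of the ingress-nginx ConfigMap (names and namespace below are typical ingress-nginx defaults, not values mandated by Fusion):

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: "0"        # allow large request bodies
  proxy-read-timeout: "300"   # allow longer-running queries
----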
Additional example values files are provided in the example-values folder. These can be passed to the install script using the --values option, for example:
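A sketch of such an invocation (the specific file names under example-values are illustrative assumptions):

[source,bash]
----
./setup_f5_gke.sh -c <cluster> -p <project> \
  --values example-values/resources.yaml \
  --values example-values/replicas.yaml
----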
The --values option can be passed multiple times; if the same configuration property is contained within multiple values files, then the values from the latest file passed as a --values option are used.
To customize connector resources, configure the connector-plugin section under pluginValues. The pluginValues section is a list of plugins and their resources. The following sample shows an example:
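A hedged sketch of what such an entry might look like (key names and nesting are illustrative and should be checked against your chart version; the callout numbers match the descriptions below):

[source,yaml]
----
pluginValues:
  - id: "sharepoint-optimized"      # <1>
    resources:                       # <2>
      limits:
        cpu: "2"
        memory: "3Gi"
      requests:
        cpu: "500m"
        memory: "2Gi"
    replicas: 1                      # <3>
----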
<1> The plugin ID. The plugin ID must match the plugin ID on the plugin ZIP file, without the lucidworks. prefix. For example, if the plugin ID on the plugin ZIP file is lucidworks.sharepoint-optimized, the plugin ID is sharepoint-optimized.
<2> The resources settings. You may specify the limits, the requests, and the CPU and memory for each.
<3> The number of replicas per connector. This value is 1 by default.

After changing the connector-plugin section, you must reinstall the affected connector.
If you used the -t -h <hostname> options when installing your cluster, our script created an additional values yaml file named tls-values.yaml. To make things easier when upgrading, add the settings from this file into your main custom values yaml file; otherwise, remember to include the tls-values.yaml file when upgrading.
When the installation completes, you can access the Fusion admin UI at https://<fusion-host>:6764/admin/.

Set the namespace for kubectl if not using the default (see the sketch after this list), so you don't have to pass -n with every command. Useful commands (k is an alias for kubectl):

* Get a list of running pods: k get pods
* Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
* Get pod deployment spec and details: k get pods <pod_id> -o yaml
* Get details about a pod's events: k describe po <pod_id>
* Port forward to a specific pod: k port-forward <pod_id> 8983:8983
* SSH into a pod: k exec -it <pod_id> -- /bin/bash
* CPU/Memory usage report for pods: k top pods
* Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
* Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
* Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq
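One common way to set the default namespace for kubectl (the namespace name is a placeholder):

[source,bash]
----
# Make <namespace> the default for subsequent kubectl commands
kubectl config set-context --current --namespace=<namespace>
----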
If a pod is not behaving as expected, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>. To see a list of Fusion services, use kubectl get services --namespace <namespace>.

To update an existing Fusion installation, pass the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
Run kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and data sources are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.

Deploy Fusion 5 on Amazon Elastic Kubernetes Service (EKS)
This topic explains how to deploy a Fusion cluster on EKS using the setup_f5_eks.sh script in the fusion-cloud-native repository. In addition, this topic provides information about how to configure IAM roles for the service account.

Install helm, as it is required to install Fusion for any K8s platform. On MacOS, you can install it with Homebrew. Fusion requires Helm version 3.0.0 or later; check your Helm version by running helm version --short.

If the cluster is set up by a cluster administrator and Fusion is installed by a user with fewer permissions, use the role definitions provided in the install-roles directory. A cluster admin applies these roles with kubectl, e.g. by applying the role.yaml and cluster-role.yaml files to the namespace Fusion will be installed into. The install user can then run the helm install command as the <install_user>. Run these commands from the fusion-cloud-native-master directory.
The setup_f5_eks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_eks.sh) provided in this repo is strictly optional. The script is mainly to help those new to Kubernetes and/or Fusion get started quickly. If you're already familiar with K8s, Helm, and EKS, then you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.

If you're new to Amazon Web Services (AWS), then please visit the Amazon Web Services getting started page (https://aws.amazon.com/getting-started/) to set up an account. If you're new to Kubernetes and EKS, then we recommend going through Amazon's EKS workshop (https://eksworkshop.com/introduction/) before proceeding with Fusion.
Install the required AWS command-line tools: kubectl, aws, eksctl, and aws-iam-authenticator. See each tool's documentation for installation instructions.

Run aws configure to configure a profile for authenticating to AWS. You'll use the profile name you configure in this step, which defaults to default, as the -p argument to the setup_f5_eks.sh script in the next section.

Download and run the setup_f5_eks.sh
script to install Fusion 5.x in an EKS cluster. For example:
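A sketch of such an invocation (the --create demo flag creates a new demo cluster, per the options described below; confirm the exact flags with --help):

[source,bash]
----
# Create a demo EKS cluster and install Fusion into the fusion-namespace namespace
./setup_f5_eks.sh -c my-eks-cluster -p profile-name -n fusion-namespace --create demo
----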
Replace my-eks-cluster, profile-name, and fusion-namespace with your cluster, profile, and namespace values.
If you want the script to create a cluster for you, pass the --create option with either demo or multi_az. If you already have a cluster, replace cluster-name with the name of the cluster you already created, and profile-name with the name of your profile; this is default if you ran the aws configure command without giving the profile a name. Use the --help option to see full script usage.
If you're using Helm v2, the setup_f5_eks.sh script installs Helm's tiller component into your EKS cluster with the cluster admin role. If you don't want this, then please upgrade to Helm v3.

The setup_f5_eks.sh script creates a service account that provides S3 read-only permissions to the created pods.

After running the setup_f5_eks.sh script, proceed to the <<verifying,Verifying the Fusion Installation>> section below.
The cluster is created using eksctl (https://eksctl.io/). By default it will set up the following resources in your AWS account:

* A dedicated VPC for the EKS cluster with the 192.168.0.0/16 CIDR range
* Public and private subnets, each with a /19 CIDR range, along with the corresponding route tables
* A node group of m5.2xlarge instances, with 3 instances spanning the public subnets
The setup_f5_eks.sh script exposes the Fusion proxy service on an external DNS name provided by an ELB over HTTP. This is done for demo or getting started purposes. However, you're strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service. See: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/

Our EKS script creates a classic ELB for exposing the Fusion proxy service. If you need to change this behavior and use the AWS Load Balancer Controller (https://github.com/kubernetes-sigs/aws-load-balancer-controller) instead, there are parameters you can pass when running the setup_f5_eks.sh script. The kube-system namespace is used for installing the aws-load-balancer-controller because the pods' priorityClassName is set to system-cluster-critical.

If you need to deploy an internal ALB, use the --internal-alb option. This will create the nodes in the internal subnets. Fusion will be reachable from an AWS instance located in any of the external subnets on the same VPC. Using an ALB also requires an ingress with a DNS name; you can use the -h option to create an ingress with the required DNS name. Finally, use Route 53 or your DNS provider to create an A ALIAS DNS record for your DNS name pointing to the ingress ADDRESS. You can get the address by listing the ingress using the command kubectl get ing.
When the installation completes, you can access the Fusion admin UI at https://<fusion-host>:6764/admin/.

Set the namespace for kubectl if not using the default, so you don't have to pass -n with every command. Useful commands (k is an alias for kubectl):

* Get a list of running pods: k get pods
* Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
* Get pod deployment spec and details: k get pods <pod_id> -o yaml
* Get details about a pod's events: k describe po <pod_id>
* Port forward to a specific pod: k port-forward <pod_id> 8983:8983
* SSH into a pod: k exec -it <pod_id> -- /bin/bash
* CPU/Memory usage report for pods: k top pods
* Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
* Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
* Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq
If a pod is not behaving as expected, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>. To see a list of Fusion services, use kubectl get services --namespace <namespace>.

To update an existing Fusion installation, pass the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
Run kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and data sources are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.
Configuring IAM roles for the service account requires admin permissions or IAM:FullAccess. Complete the following steps to create the f5-connector-plugin service account and annotate it with an IAM role that grants the policy/AmazonS3ReadOnlyAccess policy.

Deploy Fusion 5 on Azure Kubernetes Service (AKS)
This topic explains how to deploy a Fusion cluster on AKS using the setup_f5_aks.sh script in the fusion-cloud-native repository. The setup_f5_aks.sh script is the basic foundation for getting started and proof-of-concept purposes. For information about custom values in a production-ready environment, see Custom values YAML file.

Install helm, as it is required to install Fusion for any K8s platform. On MacOS, you can install it with Homebrew. Fusion requires Helm version 3.0.0 or later; check your Helm version by running helm version --short.

If the cluster is set up by a cluster administrator and Fusion is installed by a user with fewer permissions, use the role definitions provided in the install-roles directory. A cluster admin applies these roles with kubectl, e.g. by applying the role.yaml and cluster-role.yaml files to the namespace Fusion will be installed into. The install user can then run the helm install command as the <install_user>. Run these commands from the fusion-cloud-native-master directory.
The setup_f5_aks.sh script (https://github.com/lucidworks/fusion-cloud-native/blob/master/setup_f5_aks.sh) provided in this repo is strictly optional. The script is mainly to help those new to Kubernetes and/or Fusion get started quickly. If you're already familiar with K8s, Helm, and AKS, then you can use Helm directly to install Fusion into an existing cluster or one you create yourself using the process described <<helm-only,here>>.

If you're new to Azure, then please visit https://azure.microsoft.com/en-us/free/search/ to set up an account.

Install kubectl and az using the links provided below.

Required AKS command-line tools:

* kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
* az: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest

To authenticate with Azure, run the az login command (az login --help to see available options).
The script defaults to the westus2 location. For a list of locations you can choose, see https://azure.microsoft.com/en-us/global-infrastructure/locations/.

Use the Azure console in your browser to create a resource group, or simply do:
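A plausible form of that command (the resource group name is a placeholder):

[source,bash]
----
# Create a resource group to hold the AKS cluster
az group create --name <resource-group> --location westus2
----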
To have the script create an AKS cluster, you must have the azure-cli (az) command-line tools installed and az login working. If you don't want the script to create a cluster, then create a cluster before running the script and pass the name of the existing cluster using the -c parameter.
Use the --help option to see full script usage.

By default, our script installs Fusion into the default namespace; think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace. You can override the namespace using the -n option. In addition, our script uses f5 for the Helm release name; you can customize this using the -r option. Helm uses the release name you provide to track a specific instance of an installation, allowing you to perform updates and rollback changes for that specific release only.

You can also pass the --preview option to the script, which enables soon-to-be-released features for AKS, such as deploying a multi-zone cluster across 3 availability zones for higher availability guarantees. For more information about the Availability Zone feature, see https://docs.microsoft.com/en-us/azure/aks/availability-zones.

It takes a while for AKS to spin up the new cluster. The cluster will have three Standard_D4_v3 nodes, which have 4 CPU cores and 16 GB of memory each. Behind the scenes, our script calls the az aks create command.
If you're using Helm v2, the setup_f5_aks.sh script installs Helm's tiller component into your AKS cluster with the cluster admin role. If you don't want this, then please upgrade to Helm v3.

After running the setup_f5_aks.sh script, proceed to <<verifying,Verifying the Fusion Installation>>.

The setup_f5_aks.sh script exposes the Fusion proxy service on an external IP over HTTP. This is done for demo or getting started purposes. However, you're strongly encouraged to configure a K8s Ingress with TLS termination in front of the proxy service. Use the -t and -h <hostname> options to have our script create an Ingress with a TLS certificate issued by Let's Encrypt.

If you used the -t -h <hostname> options when installing your cluster, our script created an additional values yaml file named tls-values.yaml. To make things easier for you when upgrading, you should add the settings from this file into your main custom values yaml file. For example:
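A rough sketch of what those TLS-related settings might look like in a custom values file (the exact keys depend on your chart version and platform; treat this as illustrative only):

[source,yaml]
----
api-gateway:
  service:
    type: "NodePort"
  ingress:
    enabled: true
    host: "<hostname>"
    tls:
      enabled: true
----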
Otherwise, remember to include the tls-values.yaml file when upgrading.

When the installation completes, you can access the Fusion admin UI at https://<fusion-host>:6764/admin/.

Set the namespace for kubectl if not using the default, so you don't have to pass -n with every command. Useful commands (k is an alias for kubectl):

* Get a list of running pods: k get pods
* Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
* Get pod deployment spec and details: k get pods <pod_id> -o yaml
* Get details about a pod's events: k describe po <pod_id>
* Port forward to a specific pod: k port-forward <pod_id> 8983:8983
* SSH into a pod: k exec -it <pod_id> -- /bin/bash
* CPU/Memory usage report for pods: k top pods
* Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
* Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
* Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq
If a pod is not behaving as expected, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>. To see a list of Fusion services, use kubectl get services --namespace <namespace>.

To update an existing Fusion installation, pass the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
Run kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and data sources are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.

Deploy Fusion 5 on Other Kubernetes Platforms
The setup_f5_k8s.sh script in the fusion-cloud-native repository provides deployment support for any Kubernetes platform, including on-premise, private cloud, public cloud, and hybrid platforms. This script is used by the setup_f5_gke.sh, setup_f5_eks.sh, and setup_f5_aks.sh scripts, which provide additional platform-specific support for Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). This topic explains how to deploy a Fusion cluster in Kubernetes using the setup_f5_k8s.sh script in the fusion-cloud-native repository. If you're deploying on-premises or using a localized repository, you'll need to use a private repository for Docker images.

Install helm, as it is required to install Fusion for any K8s platform. On MacOS, you can install it with Homebrew. Fusion requires Helm version 3.0.0 or later; check your Helm version by running helm version --short.

If the cluster is set up by a cluster administrator and Fusion is installed by a user with fewer permissions, use the role definitions provided in the install-roles directory. A cluster admin applies these roles with kubectl, e.g. by applying the role.yaml and cluster-role.yaml files to the namespace Fusion will be installed into. The install user can then run the helm install command as the <install_user>. Run these commands from the fusion-cloud-native-master directory.
directory.YAML
file. If you use one of our setup scripts, such as setup_f5_gke.sh
, then it will create a custom values YAML file for you the first time you run it using https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.yaml.example as a template.If you’re working with Helm directly and not using one of our setup scripts, then run the https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.sh script to create a custom values YAML file from our https://github.com/lucidworks/fusion-cloud-native/blob/master/customize_fusion_values.yaml.example template as a starting point:--help
for usage details.<provider>
is the K8s platform you’re running on, such as gke
<cluster>
is the name of your cluster
<namespace>
is the K8s namespace where you plan to install Fusion
--node-pool
option specifies the node selector label for determining which nodes to run Fusion pods. You can pass "{}"
to let Kubernetes decide which nodes to schedule pods on.${MY_VALUES}
in the commands belo. Replace the filename with the correct filename for your environment. Keep this file handy, as you’ll need it to customize Fusion settings and upgrade to a newer version.Review the settings in the custom values YAML file to ensure the defaults are appropriate for your environment, including the number of Solr and Zookeeper replicas.Add the Lucidworks Helm repo:customize_fusion_values.sh
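The repo is added with a standard helm repo add command; the repository URL shown below is the publicly documented one, but verify it against current Lucidworks documentation:

[source,bash]
----
helm repo add lucidworks https://charts.lucidworks.com
helm repo update
----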
The customize_fusion_values.sh script creates an upgrade script to install/upgrade Fusion into Kubernetes using Helm. Look in the directory where you ran customize_fusion_values.sh for a script named like <provider>_<cluster>_<namespace>_upgrade_fusion.sh. Run this script to install Fusion.
When the installation completes, you can access the Fusion admin UI at https://<fusion-host>:6764/admin/.

Set the namespace for kubectl if not using the default, so you don't have to pass -n with every command. Useful commands (k is an alias for kubectl):

* Get a list of running pods: k get pods
* Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline
* Get pod deployment spec and details: k get pods <pod_id> -o yaml
* Get details about a pod's events: k describe po <pod_id>
* Port forward to a specific pod: k port-forward <pod_id> 8983:8983
* SSH into a pod: k exec -it <pod_id> -- /bin/bash
* CPU/Memory usage report for pods: k top pods
* Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0
* Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N
* Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq
If a pod is not behaving as expected, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for previous versions of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>. To see a list of Fusion services, use kubectl get services --namespace <namespace>.

To update an existing Fusion installation, pass the --upgrade option with our setup scripts in this repo. The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
Run kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step. This will log you into the application. It is recommended that you create another administrative user with a more desirable password. The dashboards and data sources are set up for you in Grafana; simply navigate to Dashboards -> Manage to view the available dashboards.

Deploy Fusion at Scale
While the setup_f5_*.sh scripts are handy for getting started and proof-of-concept purposes, this article covers the planning process for building a production-ready environment.

Before you begin, you need:

* The command-line tools for your platform, such as gcloud or aws, and kubectl. See the platform-specific instructions linked above, or check with your cloud provider.
* A Kubernetes cluster (its name is passed to the scripts using the -c arg).
* A local copy of the fusion-cloud-native repository: git clone https://github.com/lucidworks/fusion-cloud-native

Run the customize_fusion_values.sh script. Use the --help parameter to see script usage details. The script creates the following files:

File | Description |
---|---|
<provider>_<cluster>_<namespace>_fusion_values.yaml | Main custom values YAML used to override Helm chart defaults for Fusion microservices. |
<provider>_<cluster>_<namespace>_monitoring_values.yaml | Custom values YAML used to configure Prometheus and Grafana. |
<provider>_<cluster>_<namespace>_fusion_resources.yaml | Resource requests and limits for all microservices. |
<provider>_<cluster>_<namespace>_fusion_affinity.yaml | Pod affinity rules to ensure multiple replicas for a single service are evenly distributed across zones and nodes. |
<provider>_<cluster>_<namespace>_upgrade_fusion.sh | Script used to install and/or upgrade Fusion using the aforementioned custom values YAML files. |
Open the <provider>_<cluster>_<release>_fusion_values.yaml file to familiarize yourself with its structure and contents. Notice it contains a separate section for each of the Fusion microservices. The example configuration of the query-pipeline service below illustrates some important concepts about the custom values YAML file.
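A hedged sketch of what a query-pipeline section can look like (the exact keys and values are illustrative, not the original example; the nodeSelector shown assumes the GKE default pool from the table further below):

[source,yaml]
----
query-pipeline:
  enabled: true
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool
  replicaCount: 2
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "2"
      memory: "3Gi"
----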
The file is named <provider>_<cluster>_<namespace>_fusion_values.yaml. For example, gke_search_f5_fusion_values.yaml. The parameters in the file name are:

Parameter | Description |
---|---|
<provider> | The K8s platform you’re running on, such as gke . |
<cluster> | The name of your cluster. |
<namespace> | The K8s namespace where you want to install Fusion. |
<node_selector> | Specifies a nodeSelector label to find nodes to schedule Fusion pods on. |
The --node-pool <node_selector> label is very important. Using the wrong value will cause your pods to be stuck in the pending state. If you're not sure about the correct value for your cluster, pass "{}" to let Kubernetes decide which nodes to schedule Fusion pods on.

nodeSelector labels are provider-specific. The fusion-cloud-native scripts use the following defaults for GKE and EKS:

Provider | Default node selector |
---|---|
GKE | cloud.google.com/gke-nodepool: default-pool |
EKS | alpha.eksctl.io/nodegroup-name: standard-workers |
Add the following to your values.yaml file to avoid a known issue that prevents the kuberay-operator pod from launching successfully:

[source,yaml]
----
kuberay-operator:
  crd:
    create: true
----
Flag | Description |
---|---|
--node-pool | Add a Fusion specific label to your nodes. |
--with-resource-limits | Configure resource requests/limits. |
--with-replicas | Configure replica counts. |
--with-affinity-rules | Configure pod affinity rules for Fusion services. |
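A sketch of how these flags might be combined in a single customize_fusion_values.sh invocation (all values are placeholders; verify the exact flags with --help):

[source,bash]
----
./customize_fusion_values.sh -c <cluster> -n <namespace> --provider <provider> \
  --node-pool "fusion_node_type: <NODE_LABEL>" \
  --with-resource-limits --with-replicas --with-affinity-rules
----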
Use --node-pool to add a Fusion specific label to your nodes by doing: --node-pool 'fusion_node_type: <NODE_LABEL>'.

If you want to use a custom storage class for persistent volumes, create a storageClass.yaml with the following contents:
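A hedged example of what such a file might contain (the provisioner and parameters shown are GKE-specific assumptions; adjust them for your platform):

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fusion-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
----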
Individual Solr node pools are defined under the nodePools property. If any property for that statefulset needs to be changed from the default set of values, then it can be set directly on the object representing the node pool; any properties that are omitted default to the base value. See the following example (additional whitespace added for display purposes only):
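A rough sketch of the shape such a configuration might take (key names and nesting are assumptions; consult the Fusion Helm chart for the exact schema):

[source,yaml]
----
solr:
  nodePools:
    - name: ""                       # default partition
    - name: "analytics"
      nodeSelector:
        fusion_node_type: analytics
      replicaCount: 6
    - name: "search"
      nodeSelector:
        fusion_node_type: search
      replicaCount: 12
----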
* The empty string "" is the suffix for the default partition.
* Nodes in the analytics partition are labeled with fusion_node_type=analytics. You can use the fusion_node_type property in Solr auto-scaling policies to govern replica placement during collection creation.
* Nodes in the search partition are labeled with fusion_node_type=search.
* Each partition is defined in the nodePools section above; the default partition uses the nodePools value "".
* The analytics partition replicaCount, or number of Solr pods, is six. The search partition replicaCount is twelve.

Each nodePool is automatically assigned the -Dfusion_node_type property of search, system, or analytics. This value matches the name of the nodePool. For example, -Dfusion_node_type=search. The Solr pods have a fusion_node_type system property set accordingly.

To enable Kubernetes network policies for Fusion services, pass --set global.networkPolicyEnabled=true when installing the Fusion Helm chart.
For air-gapped or private-registry installs, use an intermediate machine, referred to here as the envoy, that can reach both DockerHub and your private registry:

1. Install Docker on the envoy. You need at least 100GB of free disk for Docker.
2. Pull the Fusion images from DockerHub into the envoy's local registry. For example, to pull the query pipeline image, run docker pull lucidworks/query-pipeline:5.9.0. See docker pull --help for more information about pulling Docker images.
3. Connect the envoy to the private Docker registry, most likely via a VPN connection. In this example, the private Docker registry is referred to as <internal-private-registry>.
4. Push the images from the envoy's Docker registry to the private registry. This will take a long time.

Fusion microservices let you supply the imagePullSecrets setting using custom values YAML. However, other 3rd party services, including Zookeeper, Pulsar, Prometheus, and Grafana, don't allow you to supply the pull secret using the custom values YAML. To patch the default service account for your namespace and add the pull secret, run the following:
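A standard way to patch the default service account with a pull secret (this is a generic Kubernetes command, not something specific to the Fusion scripts; the secret name is the one you create for your private registry):

[source,bash]
----
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "<internal-private-secret>"}]}'
----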
If your shell requires it, escape the double quotes (\") or reverse the order of single and double quotes. Replace <internal-private-secret> with the name of the secret you created in the steps above.
The customcerts.yaml file is the example file in these instructions. Replace EXAMPLE-VALUES-FILE.yaml with your previous values file. The configuration adds an init-container with the name import-certs.

On Unix, place the .crt file in $fusion_home/apps/jetty/connectors/etc/yourcertname.crt. On Windows, place the .crt file in $fusion_home\apps\jetty\connectors\etc\yourcertname.crt.

If needed, run the customize_fusion_values.sh script using BASH.

Run the customize_fusion_values.sh
script with the --prometheus true option. This creates an extra custom values YAML file for installing Prometheus and Grafana, <provider>_<cluster>_<namespace>_monitoring_values.yaml. For example: gke_search_f5_monitoring_values.yaml.

Run the install_prom.sh script to install Prometheus & Grafana in your namespace. Include the provider, cluster name, namespace, and helm release as in the example below:
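A sketch of such an invocation (flag names are assumptions based on the parameters listed above; confirm them with --help):

[source,bash]
----
./install_prom.sh --provider gke -c <cluster> -n <namespace> -r <release>
----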
Pass the --help parameter to see script usage details. For more detail, see the install_prom.sh script itself.

Fusion 5 Upgrades
Deployment type | Platform |
---|---|
Azure Kubernetes Service (AKS) | aks |
Amazon Elastic Kubernetes Service (EKS) | eks |
Google Kubernetes Engine (GKE) | gke |
To upgrade:

1. Open the <platform>_<cluster>_<release>_upgrade_fusion.sh upgrade script file for editing.
2. Update the CHART_VERSION to your target Fusion version, and save your changes.
3. Run the <platform>_<cluster>_<release>_upgrade_fusion.sh script. The <release> value is the same as your namespace, unless you overrode the default value using the -r option.
4. Run kubectl get pods to see the changes applied to your cluster. It may take several minutes to perform the upgrade, as new Docker images are pulled from DockerHub. To see the versions of running pods, use the command shown after this list.
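The same command used in the verification sections above lists the image versions of the running pods:

[source,bash]
----
kubectl get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq
----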
Most Fusion services use a RollingUpdate update policy. Zookeeper, however, uses OnDelete to avoid changing critical stateful pods in the Fusion deployment. To apply changes to Zookeeper after performing the upgrade (uncommon), you need to manually delete the pods, for example with kubectl delete pod <zookeeper_pod_id>. Alternatively, change the updateStrategy under the zookeeper section in your "${MY_VALUES}" file:
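A sketch of that setting (this assumes the chart exposes updateStrategy under the zookeeper section, as the text states; verify the exact key against your chart version):

[source,yaml]
----
zookeeper:
  updateStrategy:
    type: "RollingUpdate"
----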
To upgrade using our setup scripts, run the setup_f5_<platform>.sh script that matches your Kubernetes platform, passing the --upgrade option. To see what would be upgraded, pass the --dry-run option to the script.

The upgrade script used above is created by the customize_fusion_values.sh
script. The upgrade script hard-codes the parameters so you don't have to remember which parameters to pass to the script, which is helpful when working with multiple K8s clusters. Make sure you check the script into version control alongside your custom values YAML files.

Whenever you change the custom values YAML files for your cluster, you need to run the upgrade script to apply the changes. The script calls helm upgrade with the correct parameters and --values options. If you run helm upgrade without passing the custom values YAML files, the deployment will revert to using chart defaults, which you never want to do.

Before running the script, make sure your kubeconfig is pointing to the correct cluster and that you're using Helm v3; if not, the upgrade fails. Select the correct kubeconfig before running the script.