Configure Pod Affinity
The pod affinity rules are based on the `kubernetes.io/hostname` topology key. The scheduling policies changed as follows:

 | Policy |
---|---|
Before | `requiredDuringSchedulingIgnoredDuringExecution:` |
After | `preferredDuringSchedulingIgnoredDuringExecution:` |
When you pass the `--with-affinity-rules` option while running the `./customize_fusion_values.sh` script, the pod affinity rules are configured for your cluster. Alternatively, copy `affinity.yaml` and rename it using the following naming convention: `<provider>_<cluster>_<release>_fusion_affinity.yaml`. To implement the file, append the following to your upgrade script:
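The exact line depends on how your upgrade script was generated. A minimal sketch, assuming the script collects `--values` flags in a `MY_VALUES` variable that is later passed to `helm upgrade` (the variable name and file name are illustrative):

```bash
# Append the affinity values file to the set of files passed to helm upgrade.
MY_VALUES="${MY_VALUES} --values gke_search_f5_fusion_affinity.yaml"
```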
Each Solr pod has the `solr_zone` system property set to the zone it is running in, such as `-Dsolr_zone=us-west1-a`. This guide covers how to use the `solr_zone` property to distribute replicas across zones in the Deploy Fusion at Scale section. Setting the `solr_zone` property for Solr pods requires the Solr service account to have a ClusterRoleBinding that allows it to get node metadata from the Kubernetes API service.
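A minimal sketch of such a ClusterRole and ClusterRoleBinding, assuming the Solr service account is named `solr` and lives in the `f5` namespace (both names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: solr-node-reader        # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]              # read node metadata, including zone labels
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: solr-node-reader
subjects:
  - kind: ServiceAccount
    name: solr                  # assumption: your Solr service account
    namespace: f5               # assumption: your Fusion namespace
roleRef:
  kind: ClusterRole
  name: solr-node-reader
  apiGroup: rbac.authorization.k8s.io
```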
Deploy Fusion at Scale
While the `setup_f5_*.sh` scripts are handy for getting started and for proof-of-concept purposes, this article covers the planning process for building a production-ready environment. You need the command-line tools for your platform, such as `gcloud` or `aws`, as well as `kubectl`.
See the platform-specific instructions linked above, or check with your cloud provider. You also need an existing cluster; its name is passed to the scripts with the `-c` arg. Clone the `fusion-cloud-native` repository:

```bash
git clone https://github.com/lucidworks/fusion-cloud-native
```
Generate your custom values files by running the `customize_fusion_values.sh` script. Pass the `--help` parameter to see script usage details. The script produces the following files:

File | Description |
---|---|
`<provider>_<cluster>_<namespace>_fusion_values.yaml` | Main custom values YAML used to override Helm chart defaults for Fusion microservices. |
`<provider>_<cluster>_<namespace>_monitoring_values.yaml` | Custom values YAML used to configure Prometheus and Grafana. |
`<provider>_<cluster>_<namespace>_fusion_resources.yaml` | Resource requests and limits for all microservices. |
`<provider>_<cluster>_<namespace>_fusion_affinity.yaml` | Pod affinity rules to ensure multiple replicas for a single service are evenly distributed across zones and nodes. |
`<provider>_<cluster>_<namespace>_upgrade_fusion.sh` | Script used to install and/or upgrade Fusion using the aforementioned custom values YAML files. |
Review the `<provider>_<cluster>_<release>_fusion_values.yaml` file to familiarize yourself with its structure and contents. Notice that it contains a separate section for each of the Fusion microservices. The example configuration of the `query-pipeline` service below illustrates some important concepts about the custom values YAML file.
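The original example isn't reproduced here; the following sketch shows the general shape of a service section, with illustrative values only (verify keys such as `replicaCount` and `resources` against your generated file):

```yaml
query-pipeline:
  enabled: true
  replicaCount: 2                 # number of query-pipeline pods
  nodeSelector:
    cloud.google.com/gke-nodepool: default-pool
  resources:                      # requests/limits for each pod
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "2"
      memory: "4Gi"
```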
The main custom values file follows the naming convention `<provider>_<cluster>_<namespace>_fusion_values.yaml`. For example, `gke_search_f5_fusion_values.yaml`.

Parameter | Description |
---|---|
`<provider>` | The K8s platform you're running on, such as `gke`. |
`<cluster>` | The name of your cluster. |
`<namespace>` | The K8s namespace where you want to install Fusion. |
`<node_selector>` | Specifies a `nodeSelector` label to find nodes to schedule Fusion pods on. |
The `--node-pool <node_selector>` label is very important. Using the wrong value will cause your pods to be stuck in the `pending` state. If you're not sure about the correct value for your cluster, pass `''` to let Kubernetes decide which nodes to schedule Fusion pods on. `nodeSelector` labels are provider-specific. The `fusion-cloud-native` scripts use the following defaults for GKE and EKS:

Provider | Default node selector |
---|---|
GKE | `cloud.google.com/gke-nodepool: default-pool` |
EKS | `alpha.eksctl.io/nodegroup-name: standard-workers` |
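If you're not sure which labels your nodes carry, you can list them with a standard `kubectl` command:

```bash
# Shows every node along with all of its labels.
kubectl get nodes --show-labels
```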
Add the following to your `values.yaml` file to avoid a known issue that prevents the `kuberay-operator` pod from launching successfully:

```yaml
kuberay-operator:
  crd:
    create: true
```
The `customize_fusion_values.sh` script supports the following flags:

Flag | Description |
---|---|
`--node-pool` | Add a Fusion-specific label to your nodes. |
`--with-resource-limits` | Configure resource requests/limits. |
`--with-replicas` | Configure replica counts. |
`--with-affinity-rules` | Configure pod affinity rules for Fusion services. |
Use `--node-pool` to add a Fusion-specific label to your nodes, for example: `--node-pool 'fusion_node_type: <NODE_LABEL>'`.
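The label must actually exist on the target nodes for the selector to match. A sketch of applying it by hand; the node name is hypothetical:

```bash
# Label one node so pods selecting fusion_node_type=system can land on it.
kubectl label node gke-search-default-pool-1a2b3c4d fusion_node_type=system
```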
To use a custom storage class, create a `storageClass.yaml` file with the following contents:
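The original file contents aren't reproduced here. A sketch for SSD-backed volumes on GKE follows; the class name is illustrative, and the provisioner and parameters vary by platform:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fusion-ssd              # illustrative name; reference it from your values YAML
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                  # SSD persistent disks on GKE
```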
Solr statefulsets are defined under the `nodePools` property. If any property for a statefulset needs to be changed from the default set of values, set it directly on the object representing that node pool; any omitted properties default to the base values. See the following example (additional whitespace added for display purposes only):
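A sketch of a `nodePools` configuration consistent with the partitions described below; the exact keys come from the Fusion Solr chart, so treat these names and values as illustrative:

```yaml
solr:
  nodePools:
    - name: ""                   # default partition for system collections
    - name: "analytics"
      replicaCount: 6            # six Solr pods in the analytics partition
      storageSize: "100Gi"
      nodeSelector:
        fusion_node_type: analytics
    - name: "search"
      replicaCount: 12           # twelve Solr pods in the search partition
      storageSize: "50Gi"
      nodeSelector:
        fusion_node_type: search
```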
The value `""` is the suffix for the default partition. Solr pods in the analytics partition run with the system property `fusion_node_type=analytics`. You can use the `fusion_node_type` property in Solr auto-scaling policies to govern replica placement during collection creation. Likewise, Solr pods in the search partition run with `fusion_node_type=search`.
Each partition corresponds to an entry in the `nodePools` section above. System pods run in the default partition, identified by the `nodePools` value `""`. In this example, the analytics partition `replicaCount`, or number of Solr pods, is six. The search partition `replicaCount` is twelve. Each nodePool is automatically assigned the `-Dfusion_node_type` property of `<search>`, `<system>`, or `<analytics>`. This value matches the name of the nodePool. For example, `-Dfusion_node_type=<search>`. The Solr pods have a `fusion_node_type` system property, as shown below:
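One way to confirm this on a running pod; the pod and namespace names here are hypothetical, and the property typically shows up in the pod's JVM arguments or environment:

```bash
# Search the pod spec for the fusion_node_type system property.
kubectl get pod f5-solr-0 -n f5 -o yaml | grep fusion_node_type
# expect a JVM arg such as: -Dfusion_node_type=search
```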
To enable network policies for Fusion services, pass `--set global.networkPolicyEnabled=true`
when installing the Fusion Helm chart.

To install Fusion without access to the public internet, stage the images on a machine that can reach both networks; in this example, that machine is referred to as `envoy`:

1. Set up `envoy` with Docker installed. You need at least 100GB of free disk for Docker.
2. Pull the Fusion images into `envoy`'s local registry. For example, to pull the query pipeline image, run `docker pull lucidworks/query-pipeline:5.9.0`. See `docker pull --help` for more information about pulling Docker images.
3. Connect `envoy` to the private Docker registry, most likely via a VPN connection. In this example, the private Docker registry is referred to as `<internal-private-registry>`.
4. Push the images from `envoy`'s Docker registry to the private registry. This will take a long time.
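A sketch of retagging and pushing a single image to the private registry; repeat for each Fusion image you pulled:

```bash
# Retag the image for the private registry, then push it.
docker tag lucidworks/query-pipeline:5.9.0 <internal-private-registry>/lucidworks/query-pipeline:5.9.0
docker push <internal-private-registry>/lucidworks/query-pipeline:5.9.0
```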
Fusion services let you supply the `imagePullSecrets` setting using custom values YAML. However, other 3rd-party services, including ZooKeeper, Pulsar, Prometheus, and Grafana, don't allow you to supply the pull secret using the custom values YAML. To patch the default service account for your namespace and add the pull secret, run the command shown below. On Windows, escape the double quotes (`\"`) or reverse the order of single and double quotes. Replace `<internal-private-secret>` with the name of the secret you created in the steps above.
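The original command isn't reproduced above; the standard `kubectl patch` for this looks like the following (fill in your own namespace):

```bash
# Add the pull secret to the namespace's default service account.
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "<internal-private-secret>"}]}'
```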
The `customcerts.yaml` file is the example file in these instructions. Replace `EXAMPLE-VALUES-FILE.yaml` with your previous values file. The certificates are imported by an init-container with the name `import-certs`.
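A sketch of how the two files might be combined during an upgrade, assuming a Helm-based install; the release name, chart reference, and namespace are illustrative:

```bash
# Apply the previous values plus the custom-certificates values file.
helm upgrade f5 lucidworks/fusion --namespace f5 \
  --values EXAMPLE-VALUES-FILE.yaml \
  --values customcerts.yaml
```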
On Unix, place the `.crt` file in `$fusion_home/apps/jetty/connectors/etc/yourcertname.crt`. On Windows, place the `.crt` file in `$fusion_home\apps\jetty\connectors\etc\yourcertname.crt`.

After setting the options for the `customize_fusion_values.sh` script, run it using bash:
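An illustrative invocation; the `-c`, `-n`, and `--provider` flags are assumptions here, so confirm the exact flags with `--help`:

```bash
# Cluster "search", namespace "f5", and the GKE node pool label are examples only.
./customize_fusion_values.sh -c search -n f5 --provider gke \
  --node-pool 'cloud.google.com/gke-nodepool: default-pool' \
  --with-affinity-rules --with-resource-limits --with-replicas
```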
To set up monitoring, run the `customize_fusion_values.sh` script with the `--prometheus true` option. This creates an extra custom values YAML file for installing Prometheus and Grafana, `<provider>_<cluster>_<namespace>_monitoring_values.yaml`. For example: `gke_search_f5_monitoring_values.yaml`. Then run the `install_prom.sh` script to install Prometheus and Grafana in your namespace. Include the provider, cluster name, namespace, and Helm release as in the example below:
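An illustrative invocation; verify the flag names with `--help`:

```bash
# Provider, cluster, namespace, and release values are examples only.
./install_prom.sh --provider gke -c search -n f5 -r f5-monitoring
```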
Pass the `--help` parameter to see script usage details for the `install_prom.sh` script.