Configure Replicas and Horizontal Pod Auto-Scaling

You can configure multiple replicas and horizontal pod autoscalers (tied to CPU usage) for Fusion components.

If you used the --with-replicas option when running the ./customize_fusion_values.sh script, then you already have replicas configured for your cluster.

If not, then copy the example file (example-values/replicas.yaml) and rename it using our convention: <provider>_<cluster>_<release>_fusion_replicas.yaml
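
For example, using the same provider, cluster, and release names that appear in the snippet below (gke, search, and f5), the copy and rename is simply:

cp example-values/replicas.yaml gke_search_f5_fusion_replicas.yaml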

Append the following to your upgrade script:

MY_VALUES="${MY_VALUES} --values gke_search_f5_fusion_replicas.yaml"
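
The exact contents of the generated upgrade script vary, but as a rough sketch of where MY_VALUES ends up, every file it lists is passed to Helm on the next upgrade; the release, namespace, and chart version variables below are placeholders:

# illustrative only -- your generated upgrade script may differ
helm upgrade ${RELEASE} lucidworks/fusion --namespace ${NAMESPACE} ${MY_VALUES} --version ${CHART_VERSION}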

Tune Fusion Application Performance

In this section, we cover a variety of topics to help you get the best search performance from your Fusion application.

If you have not created an application yet, proceed to the Fusion Admin UI to create your first application. For the purposes of this section, we’ll use a sample application named dcommerce.

Fix Solr Collection Replica Placement

If you’re using multiple Solr StatefulSets, such as to partition Solr pods into search, analytics, and system pools, then you need to use a Solr auto-scaling policy to govern replica placement for Fusion collections.

Open a port-forward to a Solr pod in the cluster.

kubectl port-forward <SOLR_POD_ID> 8983
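
Keep the port-forward running in a separate terminal (or background it with &). A quick sanity check that Solr is reachable on localhost is to hit Solr's standard system info endpoint:

curl "http://localhost:8983/solr/admin/info/system?wt=json"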

Inspect the Solr auto-scaling policy in the policy.json file. The syntax is rather cryptic, but it essentially defines a separate placement policy for search-, analytics-, and system-oriented collections.

Run the ./update_policy.sh script to add the Solr auto-scaling policy from policy.json into the Solr cluster.
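
To confirm the policies took effect, you can read the cluster's auto-scaling configuration back out of Solr using the autoscaling read API (available in Solr 8.x):

curl "http://localhost:8983/solr/admin/autoscaling"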

Unfortunately, due to a limitation in Solr (https://issues.apache.org/jira/browse/SOLR-14347), replicas do not get placed correctly for Solr collections created by Fusion during application creation.

Consequently, you’ll need to delete the Solr collections and re-create them using a bash script.

The recommended approach is to adapt the update_app_coll_layout.sh script for your application, setting the correct number of shards, replicas, replica types, and policy for each collection used by your Fusion application. Make a copy of the update_app_coll_layout.sh script and set the variables at the top for your specific app, in this case dcommerce.

For this example, we’ll use the following settings:

Collection                       Shards  Replicas        Policy
dcommerce                        1       2 tlog, 3 pull  search
dcommerce_signals_aggr           1       2 tlog, 3 pull  search
dcommerce_query_rewrite          1       2 tlog, 3 pull  search
dcommerce_user_prefs             1       2 nrt           search
dcommerce_signals                3       2 nrt           analytics
dcommerce_query_rewrite_staging  1       2 nrt           analytics
dcommerce_job_reports            1       2 nrt           analytics

Here’s an example for our dcommerce app; adjust it to meet your specific use case:

#!/bin/bash

APP="dcommerce"
SOLR="http://localhost:8983"

curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_signals"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_signals_aggr"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_query_rewrite_staging"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_query_rewrite"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_job_reports"
curl "$SOLR/solr/admin/collections?action=DELETE&name=${APP}_user_prefs"

# analytics oriented collections
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_signals&collection.configName=${APP}_signals&numShards=3&replicationFactor=2&policy=analytics&maxShardsPerNode=2"
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_query_rewrite_staging&collection.configName=${APP}_query_rewrite_staging&numShards=1&replicationFactor=2&policy=analytics"
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_job_reports&collection.configName=${APP}_job_reports&numShards=1&replicationFactor=2&policy=analytics"

# search oriented collections
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}&collection.configName=${APP}&numShards=1&tlogReplicas=2&pullReplicas=3&policy=search"
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_signals_aggr&collection.configName=${APP}_signals_aggr&numShards=1&tlogReplicas=2&pullReplicas=3&policy=search"
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_query_rewrite&collection.configName=${APP}_query_rewrite&numShards=1&tlogReplicas=2&pullReplicas=3&policy=search"
curl "$SOLR/solr/admin/collections?action=CREATE&name=${APP}_user_prefs&collection.configName=${APP}_user_prefs&numShards=1&replicationFactor=2&policy=search"

Notice that the script deletes the Solr collections and re-creates them with the correct auto-scaling policy in place. Do not run this on collections that contain data without backing up that data first.
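
As a minimal sketch of backing up a collection first, you can use the standard Solr Collections API BACKUP action; the location used here (/backups) is just a placeholder and must point to a path or backup repository visible to every Solr node:

# back up one collection before deleting it (placeholder location)
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&collection=dcommerce&name=dcommerce_backup&location=/backups"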

Tune Solr Commit Settings

Fusion collections are created with a default commitWithin setting of 10 seconds. This overrides the commit settings configured for a collection in solrconfig.xml.

Committing within 10 seconds is too aggressive for production environments, as it causes Solr to open a new searcher and flush all caches frequently. For environments where optimal performance is important, you may want to disable the commitWithin setting for your collections and instead rely solely on auto soft and hard commits.

Disable commitWithin using the update_commit_within_f5.sh script, for instance:

./update_commit_within_f5.sh --collection dcommerce --gateway GATEWAY_URL --commit_within -1

Replace GATEWAY_URL with the URL of the K8s Ingress or IP for the Fusion API Gateway. Repeat this process for all Fusion collections.
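
For example, to disable commitWithin for every dcommerce collection from the table above in one pass (a simple loop over the same script and flags shown above):

for COLL in dcommerce dcommerce_signals dcommerce_signals_aggr dcommerce_query_rewrite dcommerce_query_rewrite_staging dcommerce_user_prefs dcommerce_job_reports; do
  ./update_commit_within_f5.sh --collection $COLL --gateway GATEWAY_URL --commit_within -1
done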

Tip
You can get the external IP of the API Gateway (proxy) service using:
export LW_K8S_GATEWAY_IP=$(kubectl --namespace ${LW_K8S_NAMESPACE} get service proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Configure soft / hard auto commit settings in solrconfig.xml (via the Fusion Admin UI), such as:

    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <autoSoftCommit>
      <maxTime>300000</maxTime>
    </autoSoftCommit>

You want the auto soft-commit maxTime to be as long as possible (in milliseconds) to avoid re-opening searchers too often, which invalidates your caches.

You should also consider disabling commits / optimize requests coming from external client applications by configuring the IgnoreCommitOptimizeUpdateProcessorFactory in your update processor chain(s).

    <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
      <int name="statusCode">200</int>
      <str name="responseMessage">Thou shall not issue a commit!</str>
    </processor>

This prevents external client applications that you do not control from committing (or optimizing) too often. For most production environments, you should rely solely on the auto-commit settings in solrconfig.xml.

Enable Buffering for Index Pipelines

For each index pipeline, ensure the Buffer Documents and Send Them to Solr in Batches option is enabled for the Solr Index stage.

Tune Solr Cache Settings

Solr has a number of caches, such as the filter cache, that have a major impact on performance. For many production environments, the max size for these caches is too small and should be increased. Be sure to look at the metrics for your caches after running load tests to determine if you need to tune them. Cache configuration is done in the solrconfig.xml for each collection using the Fusion Admin UI.
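
One way to review cache hit ratios and evictions after a load test is Solr's Metrics API; for example, the following (run against the port-forwarded Solr pod) filters the output down to the searcher caches:

curl "http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.searcher"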

Typically the three most important caches to tune are:

    <filterCache class="solr.FastLRUCache"
                 size="5000"
                 maxRamMB="64"
                 autowarmCount="0"/>

    <queryResultCache class="solr.LRUCache"
                      size="6000"
                      maxRamMB="250"
                      autowarmCount="0"/>

    <documentCache class="solr.LRUCache"
                   size="25000"
                   maxRamMB="64"
                   autowarmCount="0"/>
Tip
Be careful with autowarmCount, as it impacts how long it takes for a new searcher to open.

Query Pipeline Routing Parameters

If you’re using a separate search pool for search-oriented collections, then you’ll want to add the lw.nodeFilter=host:solr-search parameter to the main query pipeline(s) to ensure queries get routed from Fusion to the Solr search pods only.

If you’re using PULL replicas for search collections, then you should also pass shards.preference=replica.type:PULL,replica.location:local to Solr.

This ensures that queries get routed to PULL replicas only and favors the local replica if it exists. For more information about shards.preference, see: https://lucene.apache.org/solr/guide/8_4/distributed-requests.html#shards-preference-parameter
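
For instance, you can see the effect of shards.preference by passing it on an ad hoc query sent directly to Solr over the port-forward from earlier (the dcommerce collection name is just our running example):

curl "http://localhost:8983/solr/dcommerce/select?q=*:*&shards.preference=replica.type:PULL,replica.location:local"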

You should also provide these parameters for sidecar queries, such as in the tagger, rules, and signals boost stages.

Async Query Stages

The tagger and rules stages can be configured with a max time constraint that enforces an upper bound on how long these stages can take. Behind the scenes, this requires executing the sidecar request in a background thread.

In addition, it’s common to configure your pipeline to do the rules lookup and signals boost concurrently using Fusion’s asynchronous stage support. If you’re using these features, ensure you pass the following Java system property to the query pipeline service:

-Djava.util.concurrent.ForkJoinPool.common.parallelism=1
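
How you pass this property depends on how your deployment exposes JVM options. As one rough sketch, assuming nothing more than the standard JAVA_TOOL_OPTIONS mechanism and a placeholder deployment name, you could set it as an environment variable on the query pipeline deployment:

kubectl set env deployment/<QUERY_PIPELINE_DEPLOYMENT> \
  JAVA_TOOL_OPTIONS="-Djava.util.concurrent.ForkJoinPool.common.parallelism=1"

Keep in mind that a later helm upgrade can revert ad hoc changes like this, so prefer setting JVM options in your custom values YAML if your chart exposes them.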

Use Gatling to Run Query Performance / Load Tests

Lucidworks recommends running query performance tests to establish a baseline number of pods for the proxy, query pipeline, and Solr services. You can use the gatling-qps project provided in the fusion-cloud-native repo as a starting point for building a query load test. Gatling.io is a load test framework that provides a powerful Scala-based DSL for constructing performance test scenarios. See FusionQueryTraffic.scala in the repo as a starting point for building query performance tests for Fusion 5.
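
As a rough sketch, assuming the gatling-qps project builds with the Gatling Maven plugin (check the project's README for the actual build tool, simulation package, and any required system properties such as the target URL), a run might look like:

cd fusion-cloud-native/gatling-qps
# the simulation class may need its full package prefix
mvn gatling:test -Dgatling.simulationClass=FusionQueryTraffic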

Register Warming Queries

To avoid any potential delays when a new query pod joins the cluster, such as in reaction to an HPA auto-scaling trigger, we recommend registering a small set of queries to "warm up" the query pipeline service before it gets added to the Kubernetes service. In the query-pipeline section of the custom values YAML, configure your warming queries using the structure shown in the example below:

warmingQueryJson:
  {
  "pipelines": [
    {
      "pipeline": "<PIPELINE>",
      "collection": "<COLLECTION>",
      "params": {
        "q": ["*:*"]
      }
    },{
      "method" : "POST",
      "pipeline": "<ANOTHER_PIPELINE>",
      "collection": "<ANOTHER_COLL>",
      "params": {
        "q": ["*:*"]
      }
    }
  ],
  "profiles": [
    {
      "profile": "<PROFILE>",
      "params": {
        "q": ["*:*"]
      }
    }
  ]
  }
Note
The indentation of the opening and closing braces is important when embedding JSON in YAML.