Scale a Fusion 4.x Cluster

To scale a Fusion cluster, you can add new Fusion nodes, add new dedicated indexing nodes, or move Fusion to new nodes.

Adding a new Fusion node to an existing cluster

Follow these steps to add a new node to an existing Fusion cluster:

If you’re running Fusion’s embedded ZooKeeper instances as an ensemble for your cluster, ensure that you’re running an odd number of ZooKeeper instances in your environment after the new node is added.
  1. Stop Fusion on all nodes in the cluster.

    This ensures that there is no data inconsistency between the instances when the new node comes up.

  2. Decompress the new copy of Fusion and place it in the desired directory.

  3. Configure the fusion.cors file (fusion.properties in Fusion 4.x) to match your requirements.

    If you will also run the embedded ZooKeeper, add the new node’s IP/hostname and port to the default.zk.connect string and copy this change to all other instances in your cluster. Configure the memory and other JVM options for the Fusion modules, then save the file.
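    As an illustration (hostnames are hypothetical), if the new node is the fifth ZooKeeper instance in the ensemble, the updated string on every instance might look like this:

    ```properties
    # Hypothetical hostnames; the new node is zk5.example.com
    default.zk.connect = zk1.example.com:9983,zk2.example.com:9983,zk3.example.com:9983,zk4.example.com:9983,zk5.example.com:9983
    ```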

  4. If embedded ZooKeepers are used in your cluster and you intend to start ZooKeeper on this node, then follow the additional steps below. If not, then you are ready to start all nodes in the cluster.

    1. Copy the $FUSION_HOME/conf/zookeeper/zoo.cfg file from one of the existing nodes to the new node, overwriting the default file.

    2. Add the entry for the new ZooKeeper to the server list in the zoo.cfg file.

      The entry format is server.x=IP:port:port. For example, if this is the 5th node, then the new entry in zoo.cfg is server.5=IP:port:port.

    3. Create a zookeeper folder under $FUSION_HOME/data.

    4. Create a new myid file in $FUSION_HOME/data/zookeeper.

      The contents of this file must be an integer equal to the number of the new ZooKeeper node in the ensemble. For example, if the new node will be the 5th node in your ZooKeeper ensemble, then the myid file should contain the value "5".

    5. Copy the $FUSION_HOME/data/zookeeper/version-2 directory from one of the existing nodes to the new node, overwriting the default directory.
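    Steps 2 through 5 above can be sketched as shell commands. The install path, node number 5, IP, and ports below are illustrative; adjust them for your environment:

    ```shell
    # Illustrative values: adjust the install path and node number for your cluster.
    FUSION_HOME="${FUSION_HOME:-$HOME/fusion}"
    mkdir -p "$FUSION_HOME/conf/zookeeper" "$FUSION_HOME/data/zookeeper"

    # Step 2: add this node's entry to zoo.cfg (IP and ports are placeholders).
    echo "server.5=" >> "$FUSION_HOME/conf/zookeeper/zoo.cfg"

    # Steps 3-4: create the data directory and the myid file for node 5.
    echo "5" > "$FUSION_HOME/data/zookeeper/myid"

    # Step 5: copy the snapshot data from an existing node (hypothetical host):
    # scp -r existing-node:/path/to/fusion/data/zookeeper/version-2 "$FUSION_HOME/data/zookeeper/"
    ```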

    6. Modify the connect string for the default search cluster:

      1. Start ZooKeeper on all nodes.

        Next, you will need the zkcli script, located in $FUSION_HOME/apps/solr-dist/server/scripts/cloud-scripts. Use for Unix or zkcli.bat for Windows. The examples below use the Unix script.

      2. Download the default search cluster file:

        ./ -z <zk1>:<port1>,<zk2>:<port2>,... -cmd getfile <path_to_default_cluster> <path_to_dump_file>.json

        The path will differ depending on your Fusion version:

        • 2.4.x: /lucid/search-clusters/default

        • 3.x: /lwfusion/<fusion_version>/core/search-clusters/default

        For example:

        ./ -z localhost:9983 -cmd getfile /lwfusion/3.1.2/core/search-clusters/default default_search_cluster.json
      3. In the downloaded JSON file, find the connectString key and replace the old IP value with the IP of the new Fusion node.

        Be sure to specify the chroot if your cluster is configured to use it.

        For example:

          {
            "id" : "default",
            "connectString" : "localhost:9983/lwfusion/3.1.2/solr",
            "zkClientTimeout" : 30000,
            "zkConnectTimeout" : 60000,
            "cloud" : true,
            "bufferFlushInterval" : 1000,
            "bufferSize" : 100,
            "concurrency" : 10,
            "authConfig" : {
              "authType" : "none"
            },
            "validateCluster" : true
          }
      4. Upload the modified search cluster file:

        ./ -z <zk1>:<port1>,<zk2>:<port2>,... -cmd putfile <path_to_default_cluster> <path_to_dump_file>.json

        For example:

        ./ -z localhost:9983 -cmd putfile /lwfusion/3.1.2/core/search-clusters/default default_search_cluster.json
  5. Start Fusion on all nodes in the cluster.
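The connect-string edit in step 6 amounts to rewriting one host in the downloaded JSON. A minimal offline sketch, with hypothetical hostnames (in practice the file comes from zkcli getfile and goes back with putfile):

```shell
# Stand-in for the file fetched with getfile; hostnames are hypothetical.
cat > default_search_cluster.json <<'EOF'
{
  "id" : "default",
  "connectString" : "old-node.example.com:9983/lwfusion/3.1.2/solr",
  "cloud" : true
}
EOF

# Swap in the new node's host, keeping the port and chroot suffix intact.
sed -i 's/old-node\.example\.com/new-node.example.com/' default_search_cluster.json

grep connectString default_search_cluster.json
```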

Adding an indexing node to a Fusion cluster

If you need more capacity for indexing, you can add nodes dedicated to indexing. To do this, you add a new Fusion node, configure it to only run the Solr service, then allocate replicas of your collections to the new node.

  1. Install the Fusion package on the new node.

  2. Edit fusion.cors (fusion.properties in Fusion 4.x) as follows:

    1. Edit group.default to include only the Solr service.

      For example, change

      group.default = zookeeper, solr, api, connectors-rpc, connectors-classic, admin-ui, proxy, webapps

      to

      group.default = solr
    2. Uncomment default.zk.connect and point it to the cluster’s ZooKeeper instances.

      For example, change

      # default.zk.connect = localhost:9983

      to

      default.zk.connect =,,
    3. Save the file.

  3. Start Fusion on the new node:

    bin/fusion start

    At this point, the new node is added to the cluster. No indexing takes place on the new node yet.

  4. Allocate one or more collection replicas to this node:

    1. Open the Solr UI at http://<new-node-hostname>:8983/solr/.

    2. Click Collections.

    3. Select a collection to replicate on the new indexing node.

    4. Click Shard: shard1 (or another shard if you have more than one for this collection).

    5. Click add replica.

    6. From the Node drop-down list, select the new node.

    7. Click Create Replica.

      To verify that the collection is being replicated, you can click Cloud and view the replicas.

Consider whether secondary collections should also be replicated. For example, consider adding replicas for the signals and aggregations collections associated with the main collections that you are replicating.
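The Solr UI steps above correspond to a single call to Solr’s Collections API ADDREPLICA action. A sketch of the request, in which every hostname and the collection name are placeholders:

```shell
# Build the ADDREPLICA request; all host and collection names are placeholders.
# Solr identifies a target node as host:port_solr in the "node" parameter.
NEW_NODE="new-node.example.com:8983_solr"
URL="http://any-node.example.com:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=${NEW_NODE}"
echo "$URL"
# curl "$URL"
```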

Moving Fusion from one node to another

  1. Stop Fusion on all nodes in the cluster.

    This ensures that there is no data inconsistency between the instances when the new node comes up.

  2. Compress the Fusion node you wish to move.

  3. Copy the compressed file to the destination.

  4. Starting with step 2, follow the instructions above for adding a new node.
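Steps 2 and 3 can be sketched as follows, using a placeholder directory in place of a real install and a hypothetical destination host:

```shell
# Placeholder for the Fusion install directory to be moved.
FUSION_DIR="${FUSION_DIR:-$HOME/fusion}"
mkdir -p "$FUSION_DIR"

# Step 2: compress the install directory into a single archive.
tar czf fusion-node.tar.gz -C "$(dirname "$FUSION_DIR")" "$(basename "$FUSION_DIR")"

# Step 3: copy the archive to the destination host (hypothetical), then
# decompress it there before continuing with the add-a-node steps:
# scp fusion-node.tar.gz user@new-node:/opt/
# ssh user@new-node 'tar xzf /opt/fusion-node.tar.gz -C /opt'
ls -l fusion-node.tar.gz
```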