Install a Fusion Cluster (Unix)

This article describes how to install a Fusion cluster on multiple Unix nodes. Instructions are given for each of the cluster arrangements described in Deployment Types.

Preliminary steps

Before proceeding to one of the sections that follow, perform these steps:

To prepare for setting up a Fusion cluster
  1. Prepare your firewall so that the Fusion nodes can communicate with each other. The default ports list enumerates all of the ports that Fusion uses. From this list, it is especially important that the ZooKeeper ports, the Apache Ignite ports, and the Spark ports (if you are using Spark) are open between the different nodes for cross-cluster communication (see the firewall sketch after this list).

    If you plan to use an external SolrCloud cluster and/or an external ZooKeeper cluster, then also prepare your firewall so that Fusion nodes can communicate with the SolrCloud and ZooKeeper nodes.

  2. Fusion for Unix is distributed as a compressed archive file (.tar.gz). Download the Fusion compressed archive file to each node that will run Fusion.

    Note
    To leverage the copies of Solr and/or ZooKeeper that are distributed with Fusion on nodes that will not run Fusion (as a simple means of obtaining compatible versions of the other software), also download the Fusion compressed archive file to each of those nodes. Below, you will edit configuration files so that Fusion doesn’t run on those nodes.
  3. On each node, change your working directory to the directory in which you placed the Fusion .tar.gz file and unpack the archive, for example:

    $ cd /opt/lucidworks
    $ tar -xf fusion-version.x.tar.gz

    The resulting directory is named fusion/4.1.x. You can rename this if you wish. This directory is considered your Fusion home directory. See Directories, files, and ports for the contents of the fusion/4.1.x directory.
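
As an illustration of step 1, here is a minimal sketch for opening the relevant ports, assuming a host that uses firewalld; the tooling (firewalld, ufw, cloud security groups) and the exact port list depend on your environment, and the ports shown are only the defaults referenced elsewhere in this article. Apache Ignite and Spark use additional ports; take the complete list from the default ports list.

    # Sketch only: open default Fusion ports between cluster nodes (firewalld).
    $ sudo firewall-cmd --permanent --add-port=9983/tcp   # ZooKeeper client port (Fusion default)
    $ sudo firewall-cmd --permanent --add-port=2888/tcp   # ZooKeeper peer communication
    $ sudo firewall-cmd --permanent --add-port=3888/tcp   # ZooKeeper leader election
    $ sudo firewall-cmd --permanent --add-port=8983/tcp   # Solr
    $ sudo firewall-cmd --permanent --add-port=8764/tcp   # Fusion UI/proxy
    $ sudo firewall-cmd --reload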

Important
In the sections that follow, for every step that is performed on multiple nodes, complete the step on all nodes before moving to the next step. It is especially important that you don’t start Fusion on any node until the instructions say to do so.

In the steps below, the port numbers reflect default port numbers and one common choice (port 2181 for nodes in an external ZooKeeper cluster). Port numbers for your nodes might differ.

Nodes running core Fusion services and Solr also run ZooKeeper

In this cluster arrangement, a ZooKeeper cluster runs on the same nodes that run core Fusion services and Solr.

Fusion cluster arrangement 1

To set up a Fusion cluster

Perform the steps in the section Preliminary steps, and then perform these steps:

  1. Assign a number to each Fusion node, starting at 1. This number is referred to below as the node’s ZooKeeper myid.

  2. On each Fusion node, create a fusion/4.1.x/data/zookeeper directory and, inside it, a file named myid whose only contents are the ZooKeeper myid assigned to that node (see the sketch after these steps).

  3. On each Fusion node, open the fusion/4.1.x/conf/zookeeper/zoo.cfg file in a text editor and add the following after the clientPort line (change the hostnames or IP addresses to the correct ones for your servers):

    server.1=[Hostname/IP for ZooKeeper with myid 1]:2888:3888
    server.2=[Hostname/IP for ZooKeeper with myid 2]:2888:3888
    server.3=[Hostname/IP for ZooKeeper with myid 3]:2888:3888
    Note
    Don’t use localhost or 127.0.0.1 as the hostname/IP. Specify the hostname/IP that other nodes will use when communicating with the current node.
  4. On each Fusion node, edit default.zk.connect in fusion/4.1.x/conf/fusion.properties to point to the ZooKeeper hosts:

    default.zk.connect=[ZK host 1]:9983,[ZK host 2]:9983,[ZK host 3]:9983
  5. On each node, start ZooKeeper with bin/zookeeper start. ZooKeeper should start without errors. If a ZooKeeper instance fails to start, check the log at fusion/4.1.x/var/log/zookeeper/zookeeper.log.

  6. On each node, start the rest of Fusion using bin/fusion start.

  7. Create an admin password and log in to Fusion at http://FIRST_NODE_IP:8764, where FIRST_NODE_IP is the IP address of your first Fusion node.

  8. Verify the Solr cluster is healthy by looking at http://ANY_NODE_IP:8983/solr/#/~cloud, where ANY_NODE_IP is the IP address of a Solr node. All of the nodes should appear green. (Alternatively, query the Collections API, as in the sketch after these steps.)

  9. If necessary, prepare high availability by setting up a load balancer in front of Fusion that balances across the Fusion UI URLs at http://NODE_IP:8764.

    Consult your load balancer’s documentation for instructions.
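
As a concrete illustration of steps 2, 3, and 5, here is a minimal sketch for the node whose ZooKeeper myid is 1. It assumes Fusion is unpacked under /opt/lucidworks as in the preliminary steps; ZK_HOST_1 through ZK_HOST_3 are placeholders for your own hostnames, and appending the server entries to the end of zoo.cfg also places them after the clientPort line in the default file.

    # Sketch only: run on the node whose ZooKeeper myid is 1.
    $ cd /opt/lucidworks/fusion/4.1.x

    # Step 2: create the ZooKeeper data directory and the myid file.
    $ mkdir -p data/zookeeper
    $ echo "1" > data/zookeeper/myid

    # Step 3: add the ensemble members (ZK_HOST_1..3 are placeholders; do not use localhost).
    $ printf '%s\n' \
        'server.1=ZK_HOST_1:2888:3888' \
        'server.2=ZK_HOST_2:2888:3888' \
        'server.3=ZK_HOST_3:2888:3888' >> conf/zookeeper/zoo.cfg

    # Step 5: start ZooKeeper, then inspect the log if it fails to come up.
    $ bin/zookeeper start
    $ tail -n 50 var/log/zookeeper/zookeeper.log

On the nodes with myid 2 and 3, repeat this with 2 or 3 as the contents of the myid file; the zoo.cfg entries are identical on every node.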
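
If you prefer to verify the Solr cluster from the command line instead of (or in addition to) the Admin UI check in step 8, Solr’s Collections API CLUSTERSTATUS action reports the live nodes and the state of each replica. A minimal sketch, assuming curl is available and Solr is listening on the default port 8983:

    # Sketch only: every node should appear under live_nodes, and each replica should be "active".
    $ curl "http://ANY_NODE_IP:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"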

Nodes running ZooKeeper aren’t running core Fusion services or Solr

In this cluster arrangement, the ZooKeeper cluster runs on nodes in the Fusion cluster on which core Fusion services and Solr aren’t running.

Every node in the Fusion cluster has the Fusion distribution installed, because ZooKeeper ships with Fusion. On the ZooKeeper nodes, neither core Fusion services nor Solr is started.

Fusion cluster arrangement 2

To set up a Fusion cluster

Perform the steps in the section Preliminary steps, and then perform these steps:

  1. On each node that will run core Fusion services and Solr, edit fusion/4.1.x/conf/fusion.properties and remove zookeeper from the group.default list (see the sketch after these steps). This prevents ZooKeeper from starting when you start Fusion on those nodes.

  2. On each Fusion node, edit default.zk.connect in fusion/4.1.x/conf/fusion.properties to point to the ZooKeeper hosts:

    default.zk.connect=[ZK host 1]:2181,[ZK host 2]:2181,[ZK host 3]:2181
  3. On each ZooKeeper node, start ZooKeeper with bin/zookeeper start. ZooKeeper should start without errors. If a ZooKeeper instance fails to start, check the log at fusion/4.1.x/var/log/zookeeper/zookeeper.log.

  4. On each Fusion node, start Fusion using bin/fusion start.

  5. Create an admin password and log in to Fusion at http://FIRST_NODE_IP:8764, where FIRST_NODE_IP is the IP address of your first Fusion node.

  6. Verify the Solr cluster is healthy by looking at http://ANY_NODE_IP:8983/solr/#/~cloud, where ANY_NODE_IP is the IP address of a Solr node. All of the nodes should appear green.

  7. If necessary, prepare high availability by setting up a load balancer in front of Fusion that balances across the Fusion UI URLs at http://NODE_IP:8764 (see the sketch after these steps).

    Consult your load balancer’s documentation for instructions.
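
For step 1 above, the group.default property in fusion/4.1.x/conf/fusion.properties lists the services that bin/fusion start launches. A sketch of the change follows; the exact set of service names varies by Fusion version, so treat the lists below as illustrative and simply remove zookeeper from whatever list your file contains.

    # Before (illustrative service list; yours may differ):
    group.default = zookeeper, solr, api, connectors-classic, connectors-rpc, admin-ui, proxy, webapps, log-shipper

    # After: zookeeper removed, so bin/fusion start no longer starts ZooKeeper on this node.
    group.default = solr, api, connectors-classic, connectors-rpc, admin-ui, proxy, webapps, log-shipper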
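
For the load-balancer step in either arrangement, any HTTP load balancer that can distribute requests across the Fusion UI port will work. The following is a minimal sketch using nginx; the node addresses 10.0.0.1 through 10.0.0.3 are hypothetical, and 8764 is the default Fusion UI port. Session affinity, TLS termination, and health checks are left to your load balancer’s documentation.

    # Sketch only: nginx reverse proxy balancing across three Fusion UI instances.
    upstream fusion_ui {
        server 10.0.0.1:8764;
        server 10.0.0.2:8764;
        server 10.0.0.3:8764;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://fusion_ui;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }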