ZooKeeper Import/Export API

The ZooKeeper Import/Export API provides methods to upload information to, and download information from, Fusion's ZooKeeper service. It is an alternative to the ZooKeeper clients zkCli.sh and zk-shell, which are part of the Apache ZooKeeper distribution included in the Fusion distribution. The API was introduced in the Fusion 2.0 release.

The ZKImportExport service can export ZooKeeper data from any Fusion release, and can import configuration data into the ZooKeeper service of a new or existing Fusion deployment. Note that as of the Fusion 3.0 release, ZooKeeper paths vary according to the version of Fusion that you are running.

  • For details on using this script during the Fusion upgrade procedure, see Upgrading Fusion.

  • For details on using this script to migrate Fusion configurations from one deployment to another, see Migrating Fusion data.

The REST API only supports requests to export ZooKeeper configurations. The Fusion distribution also includes a utility script, zkImportExport.sh, which can both import and export ZooKeeper configuration for arbitrary Fusion instances.

ZooKeeper terminology

Apache ZooKeeper is a distributed configuration service, synchronization service, and naming registry. Fusion uses ZooKeeper to configure and manage all Fusion components in a single Fusion deployment.

  • znode: ZooKeeper data is organized into a hierarchical namespace of data nodes called znodes. A znode can have data associated with it as well as child znodes. The data in a znode is stored in a binary format, but it is possible to import, export, and view this information as JSON data. Paths to znodes are always expressed as canonical, absolute, slash-separated paths; there are no relative references.

  • ephemeral nodes: An ephemeral node is a znode which exists only for the duration of an active session. When the session ends the znode is deleted. An ephemeral znode cannot have children.

  • server: A ZooKeeper service consists of one or more machines; each machine is a server which runs in its own JVM and listens on its own set of ports. For testing, you can run several ZooKeeper servers at once on a single workstation by configuring the ports for each server.

  • quorum: A quorum is the set of ZooKeeper servers in a service. The number of servers must be odd; for most deployments, three servers are sufficient.

  • client: A client is any host or process which uses a ZooKeeper service.

See the official ZooKeeper documentation for details about using and managing a ZooKeeper service.
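For quick, read-only inspection of the znode hierarchy, the stock zkCli.sh client mentioned above is often enough. A minimal sketch, assuming a ZooKeeper server listening on localhost:2181 (adjust the host and port for your deployment):

```shell
# List the root znodes, then dump the data of the Fusion root znode.
# Commands are fed to zkCli.sh on standard input.
zkCli.sh -server localhost:2181 <<'EOF'
ls /
get /lwfusion
EOF
```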

Utility script zkImportExport.sh

This script is located in the top-level Fusion scripts directory. The script takes the following command-line arguments:

-c,--cmd <arg>        Command, one of: 'export', 'import', 'update', 'delete'.

-e,--encode <arg>     Type of encoding for znodes. Valid options:
                       'none', 'utf-8', 'base64', default is 'base64'.
                       Option 'none' will not return any data from the znodes.

-ep,--exclude <arg>   List of znode paths to exclude.
                      Can only be used to exclude nodes one level below the root node.

-eph,--ephemeral      Include ephemeral nodes while exporting znodes, boolean, default false.

-f,--filename <arg>   Name of file containing import/export data.

-h,--help             Display help page.

-ip,--include <arg>   List of znode paths to include.
                      Can only be used to include nodes one level below the root node.

-nr,--non-recursive   Do not perform recursive operations on znodes.

-o,--overwrite        Overwrite data for existing znodes. Valid only with 'update' command.

-p,--path <arg>       Path from ZooKeeper root node, e.g. '/lucid/query-pipelines'.

-r,--recursive        Perform recursive operations on znodes.

-z,--zkhost <arg>     ZooKeeper Connect string, required.

Required arguments:

  • -c, --cmd : the operation to perform.

  • -z, --zkhost : the ZooKeeper connect string.

Example commands

Export all data from a local single-node ZooKeeper service, save data to a file:

zkImportExport.sh -zkhost localhost:9983 -cmd export -path / -filename znode_dump.json

Export all Fusion configurations from a local single-node ZooKeeper service, save data to a file:

zkImportExport.sh -zkhost localhost:9983 -cmd export -path /lwfusion -filename znode_lucid_dump.json

Export Fusion user databases, groups, roles, and realms configurations from a local single-node ZooKeeper service, save data to a file:

zkImportExport.sh -zkhost localhost:9983 -cmd export -path /lwfusion/3.1.2/proxy/user -filename znode_lucid_admin_dump.json
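The -ep and -ip options can narrow an export to a subset of the tree. A sketch of excluding one node one level below the export root; the excluded path /lwfusion/test is purely illustrative:

```shell
# Export /lwfusion but skip the (hypothetical) child node /lwfusion/test.
zkImportExport.sh -zkhost localhost:9983 -cmd export -path /lwfusion -ep /lwfusion/test -filename znode_partial_dump.json
```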

Initial import of saved Fusion configuration into a new ZooKeeper:

zkImportExport.sh -zkhost localhost:9983 -cmd import -filename znode_lucid_dump.json

Note that the above command will fail if the znode structure or contents in the dump file conflict with existing znodes in the ZooKeeper service.

Update information for Fusion’s ZooKeeper service:

zkImportExport.sh -zkhost localhost:9983 -cmd update -filename znode_lucid_dump.json
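If existing znodes should be replaced rather than left in place, the 'update' command accepts the -o/--overwrite flag described above. A sketch, assuming the same dump file as the previous example:

```shell
# Overwrite data for znodes that already exist (valid only with 'update').
zkImportExport.sh -zkhost localhost:9983 -cmd update -o -filename znode_lucid_dump.json
```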

Remove a znode from Fusion’s ZooKeeper service:

zkImportExport.sh -zkhost localhost:9983 -cmd delete -path /lwfusion/test

Fusion REST API service ZKImportExport

The Fusion REST API can only be used to download information from ZooKeeper, via the GET method. The service definition is as follows:

  "zk-import-export::v1" : {
    "name" : "com.lucidworks.apollo.resources.ZKImportExportResource",
    "uri" : "/zk/export",
    "methods" : [ {
      "uri" : "/zk/export/{path:.*}",
      "name" : "getNodeInfo",
      "verb" : "GET",
      "pathParams" : [ {
        "name" : "path",
        "type" : "String"
      } ],
      "queryParams" : [ {
        "name" : "recursive",
        "type" : "Boolean"
      }, {
        "name" : "excludePaths",
        "type" : "List"
      }, {
        "name" : "includePaths",
        "type" : "List"
      }, {
        "name" : "encodeValues",
        "type" : "String"
      } ],
      "hasEntity" : false
    } ]
  }

GET data from path `/lwfusion/3.1.2/core/query-pipelines`

curl -u user:pass -X GET http://localhost:8764/api/apollo/zk/export/lwfusion/3.1.2/core/query-pipelines

{
  "path" : "/lwfusion/3.1.2/core/query-pipelines",
  "children" : [ {
    "path" : "/lwfusion/3.1.2/core/query-pipelines/default"
  } ],
  "data" : ""
}

Get info for node /lwfusion/3.1.2/core/query-pipelines recursively, without returning znode data (encodeValues=none):

curl -u user:pass -X GET "http://localhost:8764/api/apollo/zk/export/lwfusion/3.1.2/core/query-pipelines?recursive=true&encodeValues=none"
{
  "path" : "/lwfusion/3.1.2/core/query-pipelines",
  "children" : [ {
    "path" : "/lwfusion/3.1.2/core/query-pipelines/default"
  } ]
}

Get info for node /lwfusion/3.1.2/core/query-pipelines, encoding znode data as `utf-8`:

curl -u user:pass -X GET "http://localhost:8764/api/apollo/zk/export/lwfusion/3.1.2/core/query-pipelines?recursive=true&encodeValues=utf-8"
{
  "path" : "/lwfusion/3.1.2/core/query-pipelines",
  "children" : [ {
    "path" : "/lwfusion/3.1.2/core/query-pipelines/default",
    "data" : "{\n  \"id\" : \"default\",\n  \"stages\" : [ {\n    \"type\" : \"search-fields\",\n    \"id\" : \"3756b5d7-cc00-4002-bb9d-54364875c282\",\n    \"rows\" : 10,\n    \"start\" : 0,\n    \"skip\" : false,\n    \"label\" : \"search-fields\",\n    \"type\" : \"search-fields\"\n  }, {\n    \"type\" : \"facet\",\n    \"id\" : \"711fec56-734f-4ba0-9f55-0e48be659e3e\",\n    \"skip\" : false,\n    \"label\" : \"facet\",\n    \"type\" : \"facet\"\n  }, {\n    \"type\" : \"solr-query\",\n    \"id\" : \"60455b56-7d7c-46cb-b7d3-219e57e71cc3\",\n    \"httpMethod\" : \"POST\",\n    \"skip\" : false,\n    \"label\" : \"solr-query\",\n    \"type\" : \"solr-query\"\n  } ]\n}"
  } ],
  "data" : ""
}

Get info for node /lwfusion/3.1.2/core/query-pipelines, excluding path `/lwfusion/3.1.2/core/query-pipelines/default`:

curl -u user:pass -X GET "http://localhost:8764/api/apollo/zk/export/lwfusion/3.1.2/core/query-pipelines?recursive=true&encodeValues=utf-8&excludePaths=/lwfusion/3.1.2/core/query-pipelines/default"
{
  "path" : "/lwfusion/3.1.2/core/query-pipelines",
  "data" : ""
}