Other Ingestion Methods

Usually, the simplest way to get data into Fusion is through its connectors. However, in some cases it makes sense to use other methods:

  • Import with the REST API

    You can use the REST API to bypass the connectors and parsers and push documents directly to an index profile or index pipeline. A brief example appears after this list.

  • Import via Pig

    You can import data into Fusion with Pig, using the {packageUser}-pig-functions-{connectorVersion}.jar file found in $FUSION_HOME/apps/connectors/resources/lucid.hadoop/jobs.

  • Import via Hive

    Fusion ships with a Serializer/Deserializer (SerDe) for Hive, {packageUser}-hive-serde-{connectorVersion}.jar, located in $FUSION_HOME/apps/connectors/resources/lucid.hadoop/jobs.

  • Parallel bulk loader

    This method is available with a Fusion AI license. See the Parallel Bulk Loader topic in the Fusion AI documentation.

  • Batch signal ingestion

    Batch ingestion of signals is also available with a Fusion AI license. A brief example appears after this list.
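
For the REST API method, the following Python sketch pushes two documents directly to an index pipeline. The host, credentials, pipeline, and collection names are placeholders, and the endpoint path assumes a Fusion 4.x-style API; adjust all of them to match your deployment.

    # Minimal sketch: push documents straight to a Fusion index pipeline.
    # Host, credentials, pipeline, and collection names are placeholders;
    # the endpoint path assumes a Fusion 4.x-style API.
    import requests

    FUSION_URL = "https://localhost:8764"
    PIPELINE = "my-index-pipeline"
    COLLECTION = "my-collection"

    docs = [
        {"id": "doc-1", "title_t": "First document", "body_t": "Hello, Fusion."},
        {"id": "doc-2", "title_t": "Second document", "body_t": "Pushed via the REST API."},
    ]

    resp = requests.post(
        f"{FUSION_URL}/api/index-pipelines/{PIPELINE}/collections/{COLLECTION}/index",
        json=docs,
        auth=("admin", "password123"),  # replace with real credentials
    )
    resp.raise_for_status()
    print(resp.status_code, resp.text)

To push to an index profile instead, change the path segment accordingly; the request body stays the same.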
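
Likewise, a batch of signals can be sent in a single request. This sketch assumes a /api/signals/<collection> endpoint and illustrative signal fields (type, timestamp, params); consult the Signals API documentation for the exact format your release expects.

    # Minimal sketch: batch-load click signals via the Signals API.
    # The endpoint path and signal field names are assumptions; verify them
    # against the Signals API documentation for your Fusion release.
    import requests

    FUSION_URL = "https://localhost:8764"
    COLLECTION = "my-collection"  # primary collection the signals describe

    signals = [
        {
            "type": "click",
            "timestamp": "2023-01-15T10:30:00Z",
            "params": {"query": "running shoes", "docId": "doc-1", "userId": "u-42"},
        },
        {
            "type": "click",
            "timestamp": "2023-01-15T10:31:00Z",
            "params": {"query": "trail shoes", "docId": "doc-2", "userId": "u-43"},
        },
    ]

    resp = requests.post(
        f"{FUSION_URL}/api/signals/{COLLECTION}",
        json=signals,
        auth=("admin", "password123"),  # replace with real credentials
    )
    resp.raise_for_status()
    print(f"Sent {len(signals)} signals: HTTP {resp.status_code}")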