    Fusion 5.12

    Create and Run a SQL Aggregation Job

    You can perform a SQL aggregation on a signals collection for a datasource (or on another collection), using either the Managed Fusion UI or the Managed Fusion API.

    Preliminaries

    Before you can create and run a SQL aggregation job, you must create an app, create a collection (or use the default collection for the app), and add a datasource. Before you run a SQL aggregation job, you need signal data. Otherwise, there is nothing to aggregate.
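
    Before running the job, you can confirm that the signals collection actually contains signal data. Here is a minimal check using Fusion's Solr query endpoint (the host, port, credentials, and collection name are placeholders; adjust them for your deployment):

    curl -u USERNAME:PASSWORD "https://EXAMPLE_COMPANY.b.lucidworks.cloud:API_PORT/api/solr/experiment_signals/select?q=*:*&rows=0"

    A numFound of 0 in the response means there is nothing to aggregate yet.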

    Use the Managed Fusion UI

    You can use the Managed Fusion UI to perform a SQL aggregation.

    Set up a SQL aggregation job
    1. In the Managed Fusion application, navigate to Collections > Jobs.

    2. Click Add and select SQL Aggregation from the dropdown list.

    3. Specify an arbitrary Spark Job ID.

    4. For the Source Collection, select the collection that contains the data to aggregate.

      This is not the base collection. For example, to aggregate the signals in experiment_signals, you would select experiment_signals, not experiment.
    5. Click the SQL field to enter aggregation statements, and then click Close to close the dialog box. An example aggregation statement is shown after these steps.

      Select the Advanced option in the top right corner to enter detailed parameters. See SQL Aggregation Jobs for more information.
    6. Enter applicable Signal Types.

    7. Enter the Data Format.

    8. Save the job.
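
    As noted in step 5, the SQL field takes one or more aggregation statements. A typical click-signal aggregation, similar to the one used in the API example below, sums clicks per query/document pair and applies a time decay (the collection and field names are illustrative):

    SELECT SUM(count_i) AS aggr_count_i,
           query AS query_s,
           doc_id AS doc_id_s,
           time_decay(count_i, timestamp_tdt) AS weight_d
    FROM experiment_click_signals
    WHERE type_s = 'click'
    GROUP BY query, doc_id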

    Run a SQL aggregation job
    1. With the app open, navigate to Collections > Jobs.

    2. In the list of jobs, click the job you want to run, and then click Run.

      If you prefer not to run the job manually, you can also Schedule a Job.

    Use the Managed Fusion API

    You can use the Managed Fusion API to perform a SQL aggregation. For example:

    Configure a SQL aggregation job:
    curl -u USERNAME:PASSWORD -X POST -H "Content-Type: application/json" https://EXAMPLE_COMPANY.b.lucidworks.cloud:API_PORT/api/spark/configurations/experiment_click_signals_aggregation -d '
    {
      "type" : "spark",
      "id" : "experiment_click_signals_aggregation",
      "definition" : {
        "id" : "experiment_click_signals_aggregation",
        "sql" : "SELECT SUM(count_i) AS aggr_count_i, query AS query_s, doc_id AS doc_id_s, time_decay(count_i, timestamp_tdt) AS weight_d FROM default_signals WHERE type_s='click' GROUP BY query, doc_id",
        "selectQuery" : "*:*",
        "outputPipeline" : "_system",
        "rollupAggregator" : "SQL",
        "sourceRemove" : false,
        "sourceCatchup" : true,
        "outputRollup" : true,
        "parameters" : [ {
          "key" : "optimizeOutput",
          "value" : "4"
        } ]
      },
      "inputCollection" : "experiment_click_signals",
      "rows" : 10000
    }'
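
    To verify that the configuration was saved, you can retrieve it from the same configurations endpoint with a plain GET (credentials and host are placeholders as above):

    curl -u USERNAME:PASSWORD https://EXAMPLE_COMPANY.b.lucidworks.cloud:API_PORT/api/spark/configurations/experiment_click_signals_aggregation
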
    Run a SQL aggregation job:
    curl -u USERNAME:PASSWORD -X POST -H "Content-Type: application/json" https://EXAMPLE_COMPANY.b.lucidworks.cloud:API_PORT/api/jobs/spark:experiment_click_signals_aggregation/actions -d '{"action": "start"}'
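
    To check on the job afterward, you can query the job resource itself. This sketch assumes a GET on the same jobs path used for the start action returns the job's status:

    curl -u USERNAME:PASSWORD https://EXAMPLE_COMPANY.b.lucidworks.cloud:API_PORT/api/jobs/spark:experiment_click_signals_aggregation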