Built-in SQL aggregation jobs
Enabling signals for a collection automatically creates the necessary _signals and _signals_aggr collections, plus several Parameterized SQL Aggregation jobs for signal processing and aggregation.
Enable or Disable Signals
You can enable and disable signals using the Fusion UI or the REST API.
When you disable signals, the aggregation jobs are deleted, but the _signals and _signals_aggr collections are not, so your legacy signal data remains intact.

Using the UI
When you create a collection using the Fusion UI, signals are enabled and a signals collection is created by default. You can also enable and disable signals for existing collections using the Collections Manager.

Enable signals for a collection
- In the Fusion workspace, navigate to Collections > Collections Manager.
- Hover over the primary collection for which you want to enable signals.
- Click Configure to open the drop-down menu.
- Click Enable Signals. The Enable Signals window appears, with a list of the collections and jobs that are created when you enable signals.
- Click Enable Signals to confirm.
Disable signals for a collection
- In the Fusion workspace, navigate to Collections > Collections Manager.
- Hover over the primary collection for which you want to disable signals.
- Click Configure to open the drop-down menu.
- Click Disable Signals. The Disable Signals window appears, with a list of the jobs that will be deleted.
- Click Disable Signals to confirm.
Your _signals and _signals_aggr collections remain intact so that you can access your legacy signals data.
Using the Collection Features API
The /collections/{collection}/features/{feature} endpoint enables or disables signals for any collection. You can also use it to check whether signals are enabled for a collection.

When signals are enabled, the following aggregation jobs are created:

Job | Default input collection | Default output collection | Default schedule |
---|---|---|---|
COLLECTION_NAME_click_signals_aggregation | COLLECTION_NAME_signals | COLLECTION_NAME_signals_aggr | Every 15 minutes |
COLLECTION_NAME_session_rollup | COLLECTION_NAME_signals | COLLECTION_NAME_signals | Every 15 minutes |
COLLECTION_NAME_user_item_preferences_aggregation | COLLECTION_NAME_signals | COLLECTION_NAME_signals_aggr | Once per day |
COLLECTION_NAME_user_query_history_aggregation | COLLECTION_NAME_signals | COLLECTION_NAME_signals_aggr | Once per day |
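As a rough illustration of calling this endpoint from a script, the sketch below builds (but does not send) a request against it. The endpoint path comes from this page; the host, port, PUT method, and JSON body are assumptions to verify against your Fusion version's API reference.

```python
import json
import urllib.request

# Assumption: base URL of your Fusion API; adjust host and port for your deployment.
FUSION_API = "https://FUSION_HOST:6764/api"

def signals_feature_request(collection, enabled):
    """Build a request that toggles the 'signals' feature for a collection.

    The path matches the Collection Features endpoint described above; the
    PUT method and {"enabled": ...} body are assumptions, not confirmed API.
    """
    url = f"{FUSION_API}/collections/{collection}/features/signals"
    body = json.dumps({"enabled": enabled}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Build a request that would enable signals for the "products" collection.
req = signals_feature_request("products", True)
```

Sending the request (with authentication) is left out; the point is the shape of the URL and payload.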
COLLECTION_NAME_click_signals_aggregation
The COLLECTION_NAME_click_signals_aggregation job computes a time-decayed weight for each document, query, and filters group in the signals collection. Fusion computes the weight for each group using an exponential time decay on signal count (30-day half-life) and a weighted sum based on the signal type. This approach gives more weight to a signal that represents a user purchasing an item than to a user merely clicking on an item.
You can customize the signal types and weights for this job by changing the signalTypeWeights SQL parameter in the Fusion Admin UI.

Fusion passes the signalTypeWeights parameter into a WHERE IN clause to filter signals by the specified types (click, cart, purchase), and also passes the parameter into the weighted_sum SQL function. Notice that Fusion displays only the SQL parameters and not the actual SQL for this job. This simplifies configuration because, in most cases, you only need to change the parameters rather than the SQL itself. However, if you need to change the SQL for this job, you can edit it under the Advanced toggle on the form.
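To make the computation concrete, here is a minimal sketch of the same decay-and-weight logic outside of SQL. The 30-day half-life and the type filtering mirror the description above; the specific weight values and field names are illustrative assumptions, not Fusion's actual schema.

```python
import math
from collections import defaultdict

HALF_LIFE_DAYS = 30.0
# Assumption: illustrative weights; real values come from signalTypeWeights.
SIGNAL_TYPE_WEIGHTS = {"click": 1.0, "cart": 25.0, "purchase": 50.0}

def decay(age_days):
    """Exponential time decay: a signal loses half its value every 30 days."""
    return math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

def aggregate(signals):
    """Sum time-decayed, type-weighted scores per (doc, query, filters) group.

    `signals` is an iterable of dicts with doc_id, query, filters, type,
    and age_days keys (illustrative field names).
    """
    scores = defaultdict(float)
    for s in signals:
        weight = SIGNAL_TYPE_WEIGHTS.get(s["type"])
        if weight is None:  # types outside signalTypeWeights are filtered out
            continue
        key = (s["doc_id"], s["query"], s["filters"])
        scores[key] += weight * decay(s["age_days"])
    return dict(scores)

signals = [
    {"doc_id": "d1", "query": "tv", "filters": "", "type": "click", "age_days": 0},
    {"doc_id": "d1", "query": "tv", "filters": "", "type": "purchase", "age_days": 30},
]
```

A 30-day-old purchase here contributes half its full weight, which is still far more than a fresh click, matching the intent described above.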
You can configure the COLLECTION_NAME_click_signals_aggregation job to use a Parquet file as the source of raw signals instead of a Fusion signals collection:
- Use the Catalog API to set up a “catalog project” in Fusion.
- Create an assets table in the project created in the previous step. The Parquet file needs to have all of the fields that the SQL script in the COLLECTION_NAME_click_signals_aggregation job selects.
- In the COLLECTION_NAME_click_signals_aggregation job, change “source” from ${collection}_signals to catalog:${project_name}.${asset_name} (for example, catalog:fusion_test.doc_test).
- Start the job.
COLLECTION_NAME_session_rollup
The COLLECTION_NAME_session_rollup job aggregates related user activity into a session signal that contains activity count, duration, and keywords (based on user search terms). The Fusion App Insights application uses this job to show reports about user sessions. Use the elapsedSecsSinceLastActivity and elapsedSecsSinceSessionStart parameters to determine when a user session is considered complete. You can edit the SQL using the Advanced toggle.
The COLLECTION_NAME_session_rollup job uses the signals collection as both its input and output collection. Unlike other aggregation jobs that write aggregated documents to the COLLECTION_NAME_signals_aggr collection, the COLLECTION_NAME_session_rollup job creates session signals and saves them to the COLLECTION_NAME_signals collection.
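The role the two elapsed-time parameters play can be sketched as follows. This is a simplified model of sessionization, not the job's actual SQL; the threshold values are arbitrary examples.

```python
# Assumption: example thresholds; the real values are job parameters.
ELAPSED_SECS_SINCE_LAST_ACTIVITY = 30 * 60   # close a session after 30 min idle
ELAPSED_SECS_SINCE_SESSION_START = 4 * 3600  # close a session after 4 h total

def rollup_sessions(timestamps):
    """Split a sorted list of per-user event timestamps (in seconds) into sessions.

    A new session starts when either threshold is exceeded. Returns a list of
    (activity_count, duration_secs) tuples, one per session.
    """
    sessions, start, last, count = [], None, None, 0
    for t in timestamps:
        new_session = (
            start is None
            or t - last > ELAPSED_SECS_SINCE_LAST_ACTIVITY
            or t - start > ELAPSED_SECS_SINCE_SESSION_START
        )
        if new_session:
            if start is not None:
                sessions.append((count, last - start))
            start, count = t, 0
        last = t
        count += 1
    if start is not None:
        sessions.append((count, last - start))
    return sessions
```

Three events within two minutes followed by one event over an hour later would roll up into two sessions.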
COLLECTION_NAME_user_item_preferences_aggregation
The COLLECTION_NAME_user_item_preferences_aggregation job computes an aggregated weight for each user/item combination found in the signals collection. The weight for each group is computed using an exponential time decay on signal count (30-day half-life) and a weighted sum based on the signal type.
This job is a prerequisite for the ALS recommender job.
- In the job configuration panel, click Advanced to see all of the available options.
- When aggregating signals for the first time, uncheck the Aggregate and Merge with Existing checkbox. In production, once the jobs are running automatically, you can check this box. Note that if you want to discard older signals, leaving this box unchecked causes the old aggregations to be replaced completely by the new ones.
- If the original signal data has missing fields, edit the SQL query to fill in missing values for fields such as “count_i” (the number of times a user interacted with an item in a session).
- The aggregation job can sometimes run faster if you uncheck the Job Skip Check Enabled box. Do this when first loading signals.
- Use the signalTypeWeights SQL parameter to set the correct signal types and weights for your dataset. Its value is a comma-delimited list of signal types and their stakeholder-defined levels of importance. Think of this numeric value as a weight that indicates which type of signal is most important for determining a user’s interest in an item. Rank your signal types to determine which should be added, and add only the signal types that are significant. Signal types that are not added to the list are not included in the aggregation job, and for some signal types this is fine. The weights should be within a few orders of magnitude of each other; the spread of values should not be wide. For instance, click:1.0, cart:100000.0 is too wide a spread, while click:1.0 and cart:50.0 would be a reasonable setting, indicating that a cart signal is 50 times more important than a click for measuring a user’s interest in an item.
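The comma-delimited type:weight format described above is straightforward to work with programmatically; a minimal parser (the helper name is ours, not Fusion's) might look like:

```python
def parse_signal_type_weights(value):
    """Parse a signalTypeWeights value such as "click:1.0, cart:50.0"
    into a {signal_type: weight} dict."""
    weights = {}
    for pair in value.split(","):
        signal_type, weight = pair.strip().split(":")
        weights[signal_type.strip()] = float(weight)
    return weights

weights = parse_signal_type_weights("click:1.0, cart:50.0, purchase:100.0")
```

With these values, a single cart signal counts as much as fifty clicks when measuring interest in an item.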
- The Time Range field value is used in a weight-decay function that reduces the importance of signals as they age. The time range is in days, and the default is 30 days. If the time span of your signals is greater than 30 days and you want to increase this value, edit the SQL query to reflect the desired number of days. The SQL query is visible when you click Advanced in the job configuration panel; modify the line in the query that specifies “30 days”, changing it to your desired timeframe.
The recommender job, COLLECTION_NAME_item_recommendations, is scheduled to run after this job completes. Consequently, you should only run this aggregation once or twice a day, because training a recommender model is a complex, long-running job that requires significant resources from your Fusion cluster.
COLLECTION_NAME_user_query_history_aggregation
The COLLECTION_NAME_user_query_history_aggregation job computes an aggregated weight for each user/query combination found in the signals collection. The weight for each group is computed using an exponential time decay on signal count (30-day half-life) and a weighted sum based on the signal type. Use the signalTypeWeights parameter to set the correct signal types and weights for your dataset. You can use the results of this job to boost queries for a user based on their past query activity.
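One way to use those results for boosting is sketched below: given the aggregated user/query weights this job produces, select a user's highest-weighted past queries as boost candidates. The row and field names here are illustrative assumptions, not Fusion's actual output schema.

```python
def top_query_boosts(rows, user_id, n=3):
    """Return the user's n highest-weighted past queries as (query, weight) pairs.

    `rows` models the aggregated output of the user_query_history job, as a
    list of dicts with user_id, query, and weight keys (illustrative names).
    """
    mine = [(r["query"], r["weight"]) for r in rows if r["user_id"] == user_id]
    return sorted(mine, key=lambda qw: qw[1], reverse=True)[:n]

rows = [
    {"user_id": "u1", "query": "laptop", "weight": 12.5},
    {"user_id": "u1", "query": "tablet", "weight": 3.0},
    {"user_id": "u2", "query": "phone", "weight": 9.0},
]
```

The resulting pairs could then feed a query pipeline's boost parameters, with the aggregated weight serving as the boost factor.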