The default SQL aggregation of signals for a base collection named products produces the same results as legacy aggregation:
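A sketch of the full default aggregation query, reassembled from the clauses described below; the query_s and doc_id_s aliases and the products_signals collection name are assumptions based on the products collection named above:

    SELECT SUM(count_i) AS aggr_count_i,         -- summed signal count per group
           query  AS query_s,                    -- assumed alias for the grouped query field
           doc_id AS doc_id_s,                   -- assumed alias for the grouped doc_id field
           time_decay(count_i, date) AS weight_d -- built-in UDAF; 30-day half-life
      FROM products_signals                      -- assumed signals collection name
     GROUP BY query, doc_id

Each part of this query is described below.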
SELECT SUM(count_i) AS aggr_count_i
    count_i is summed as aggr_count_i.

time_decay(count_i, date) AS weight_d
    The time_decay function computes the aggregated weight_d field. This function is a Spark UserDefinedAggregateFunction (UDAF) that is built into Fusion. It computes a weight for each aggregation group, using the count and an exponential decay on the signal timestamp, with a 30-day half-life.

GROUP BY query, doc_id
    The GROUP BY clause defines the fields used to compute aggregate metrics, which are typically query, doc_id, and any filters. With SQL, you have more options for computing aggregated metrics without having to write custom JavaScript functions (which would be needed to supplement legacy aggregations). You can also use standard WHERE clause semantics, for example WHERE type_s = 'add', to provide fine-grained filters, as sketched below.
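For instance, a minimal sketch of a filtered aggregation that counts only add signals (products_signals remains an assumed collection name):

    SELECT SUM(count_i) AS aggr_count_i,
           time_decay(count_i, date) AS weight_d
      FROM products_signals
     WHERE type_s = 'add'       -- standard WHERE semantics as a fine-grained filter
     GROUP BY query, doc_id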
The time_decay function uses an abbreviated function signature, time_decay(count_i, timestamp_tdt), instead of the full function signature shown in Use Different Weights Based on Signal Types.
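A minimal sketch contrasting the two call forms; the full signature's parameter list (an explicit half-life, a reference time, and a per-signal weight) is an assumption inferred from the typed-weights example at the end of this section:

    -- abbreviated form: the half-life defaults to 30 days
    time_decay(count_i, timestamp_tdt)
    -- assumed full form: explicit half-life, reference time, and weight
    time_decay(count_i, timestamp_tdt, "30 days", ref_time, weight_d)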
For example, given these raw signals for query q1 and document 1:
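As a hypothetical illustration (all values below are invented), suppose the raw click signals are:

    query  doc_id  type_s  count_i  date
    q1     1       click   1        2024-06-01T10:00:00Z
    q1     1       click   2        2024-06-15T12:00:00Z
    q1     1       click   1        2024-06-29T09:00:00Z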
After aggregation, these signals produce a single aggregated document for q1:
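Continuing the invented rows above, the aggregated document would look like the following. aggr_count_i is the exact sum (1 + 2 + 1 = 4), while weight_d is only illustrative, since its value depends on each signal's age relative to the reference time:

    query_s  doc_id_s  aggr_count_i  weight_d
    q1       1         4             0.49  (illustrative; computed by time_decay)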
Use Different Weights Based on Signal Types shows a query that assigns different weights to signals depending on their type (add vs. click):
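A sketch of such a query, assuming a subquery computes a per-type signal_weight with a CASE expression and passes it to the full time_decay signature; the weights (0.25, 0.1, 0.05) and the 5-day half-life are illustrative values, and products_signals is again an assumed collection name:

    SELECT query_s, doc_id_s,
           time_decay(1, timestamp_tdt, "5 days", ref_time, signal_weight) AS typed_weight_d
      FROM (SELECT *,
                   CASE WHEN type_s = 'add'   THEN 0.25   -- add signals weigh more
                        WHEN type_s = 'click' THEN 0.1
                        ELSE 0.05                         -- all other signal types
                   END AS signal_weight
              FROM products_signals)
     GROUP BY query_s, doc_id_s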