The job that generates the Click-Through Rate metric is named `<EXPERIMENT-NAME>-<METRIC-NAME>`, for example, `Experiment-CTR`.
The Conversion Rate metric calculates how often queries convert into signals of a type that you specify, for example, `cart`, `purchase`, or `like` signals. (These signal types are not predefined.) For example, if you are interested in how many queries convert into `cart` signals, specify the `cart` signal type in the conversion rate metric. The Click-Through Rate metric is a conversion rate for `click` signals.

The job that generates the Conversion Rate metrics is named `<EXPERIMENT-NAME>-<METRIC-NAME>`, for example, `Experiment-Conversion`.
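Conceptually, a variant's conversion rate is the number of queries that produced at least one signal of the chosen type, divided by the variant's total number of queries. A hypothetical sketch of that calculation, written as a custom SQL metric over the `variant_queries` view and `${inputCollection}` placeholder described in the Custom SQL section below (the `cart` signal type and alias names are illustrative only):

```sql
SELECT COUNT(DISTINCT s.fusion_query_id) / COUNT(DISTINCT q.id) AS value,  -- conversion rate
       COUNT(DISTINCT q.id) AS count,                                     -- queries considered
       q.variant_id AS variant_id
  FROM variant_queries q
  LEFT JOIN ${inputCollection} s
    ON s.fusion_query_id = q.id
   AND s.type = 'cart'   -- the signal type you specified; cart is an example
 GROUP BY q.variant_id
```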
The Mean Reciprocal Rank (MRR) metric rewards variants that rank the first relevant result higher: a query whose first relevant document appears at position 3 contributes a reciprocal rank of 1/3, and MRR averages these values across queries.

The job that generates the MRR metric is named `<EXPERIMENT-NAME>-<METRIC-NAME>`, for example, `Experiment-MRR`.
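For reference, the standard definition over a set of queries Q, where rank_q is the position of the first relevant result for query q:

$$
\mathrm{MRR} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{\mathrm{rank}_q}
$$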
The Response Time metric computes a statistic (for example, `mean`, `variance`, or `max`) from response-time data. The default statistic is `avg` (average, the same as `mean`).
You can use the Response Time metric to evaluate the impact of adding additional stages to a query pipeline, for example, a recommendation or machine learning stage.
The response time is the end-to-end processing time from when a query pipeline receives a query to when the pipeline supplies a response.
The job that generates the Response Time metric is named `<EXPERIMENT-NAME>-<METRIC-NAME>`, for example, `Experiment-Response_time`.
You can compute any of the following statistics:

| Function name or alias | Description |
|---|---|
| `avg` | Mean response time |
| `kurtosis` | Kurtosis of the response times |
| `max` | Maximum response time |
| `mean` | Mean response time |
| `median` | Median response time. This is an alias for `percentile(query_time,0.5)`. |
| `min` | Minimum response time |
| `percentile_N` | Nth percentile of the response times, that is, the value at or closest to the percentile. N is an integer between 1 and 100. This is an alias for the function `percentile(query_time,N/100)`. |
| `skewness` | Skewness of the response times |
| `stddev` | Standard deviation of the response times |
| `sum` | Sum of the response times |
| `variance` | Variance of the response times |
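For example, `percentile_95` is equivalent to `percentile(query_time,0.95)`. If you need a statistic that these functions do not cover, you can compute it yourself as a custom SQL metric (described below). A minimal sketch, assuming the `query_time` field of the `variant_queries` view documented below:

```sql
SELECT percentile(query_time, 0.95) AS value,  -- 95th-percentile response time
       COUNT(1) AS count,                      -- queries contributing to the value
       variant_id
  FROM variant_queries
 GROUP BY variant_id
```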
The Custom SQL metric computes a metric directly from signals, using a SQL statement that you provide. The SQL must return rows, grouped by `variant_id`, with these fields:

* `value`. A double field that represents the metric provided by this custom SQL.
* `count`. The number of rows used to compute the value for a variant, that is, how many signals contributed to this value.
* `variant_id`. The unique identifier of the variant.

The `variant_queries` view is built into the experiment job framework. This view is transient and is not defined in the table catalog; it only exists for the duration of the metrics job. The `variant_queries` view exposes all response signals for a given variant ID, with the following fields pulled from response signals:
| Field | Description |
|---|---|
| `id` | Response signal ID set by a query pipeline and returned to the client application in the `x-fusion-query-id` response header |
| `variant_id` | Experiment variant this response signal is associated with |
| `query_doc_ids` | Comma-delimited list of document IDs returned in the response, in ranked order |
| `query_timestamp` | ISO-8601 timestamp of the time when Fusion executed the query |
| `query_user_id` | User associatedated with the query. The front-end application must supply this. |
| `query_rows` | Number of rows returned for this query, that is, the page size |
| `query_hits` | Total number of documents that match this query, that is, the number of documents that were found |
| `query_offset` | Page offset |
| `query_time` | Total time to execute the query (in milliseconds) |
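For example, a minimal sketch that uses the view to report each variant's average hit count as the metric, in the output shape that custom SQL metrics require:

```sql
SELECT AVG(query_hits) AS value,  -- average number of matching documents per query
       COUNT(1) AS count,         -- number of queries in the variant
       variant_id
  FROM variant_queries
 GROUP BY variant_id
```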
You can use the `fusion_query_id` field to join the `variant_queries` view with other signal types, such as `click`.
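For example, to get a count of clicks per variant, you could use SQL shaped like the following sketch (the alias names are illustrative; the output columns, the `${inputCollection}` placeholder, the join, and the `click` filter match the description that follows):

```sql
SELECT COUNT(1) AS value,
       COUNT(1) AS count,
       q.variant_id AS variant_id
  FROM ${inputCollection} c
  JOIN variant_queries q
    ON c.fusion_query_id = q.id   -- join click signals to response signals
 WHERE c.type = 'click'           -- only consider click signals
 GROUP BY q.variant_id
```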
Notice that we return the `value`, `count`, and `variant_id` columns as the output of our custom SQL; this is required for all custom SQL metrics. Fusion replaces the `${inputCollection}` variable with the correct collection name at runtime, which is typically a signals collection. We use the `fusion_query_id` column to join `click` signals with the `id` column of the `variant_queries` view, which illustrates how the `variant_queries` view simplifies the SQL you have to write to build a custom metric. Also notice that we only query for `click` signals; behind the scenes, Fusion sends a query to Solr with `fq=type:click`.
You can also use the `query_offset` and `query_rows` columns associated with each click in a variant to build position-aware metrics.
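For instance, a hypothetical sketch that computes the average result page on which users click, assuming `query_rows` (the page size) is always positive and numbering pages from 1:

```sql
SELECT AVG(FLOOR(q.query_offset / q.query_rows) + 1) AS value,  -- mean page of clicks
       COUNT(1) AS count,                                       -- clicks in the variant
       q.variant_id AS variant_id
  FROM ${inputCollection} c
  JOIN variant_queries q ON c.fusion_query_id = q.id
 WHERE c.type = 'click'
 GROUP BY q.variant_id
```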
The job that generates a Custom SQL metric is named `<EXPERIMENT-NAME>-<METRIC-NAME>`, for example, `Experiment-SQL`.
The Query Relevance metric measures relevance against ground truth data. You can supply the ground truth data yourself, or have the `groundTruth` job use historical click signals to generate the ground truth data automatically.
Note that the Query Relevance metric does not calculate metrics based on live traffic. Instead, it issues the queries specified in the ground truth collection against each variant, and calculates the performance of the queries.
The jobs that generate the Query Relevance metrics are named `<EXPERIMENT-NAME>-groundTruth-<METRIC-NAME>` and `<EXPERIMENT-NAME>-rankingMetrics-<METRIC-NAME>`, for example, `Experiment-groundTruth-QR` and `Experiment-rankingMetrics-QR`.
You must run the `groundTruth` job by hand the first time; Query Relevance `rankingMetrics` jobs that run before the `groundTruth` job has run do not produce metrics. Subsequently, the `groundTruth` job runs once a month.

Ground truth data maps queries to relevant documents and their relevance weights, for example:

| Query | Document ID | Weight |
|---|---|---|
| hammer | 123 | 0.9 |
| hammer | 456 | 0.8 |
| hammer | 789 | 0.7 |
| masking tape | 234 | 0.85 |
| masking tape | 567 | 0.82 |
| masking tape | 890 | 0.76 |
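For intuition only (a generic ranking-score illustration, not necessarily the exact formula the `rankingMetrics` job uses), a weighted measure such as discounted cumulative gain gives a variant more credit for ranking document 123 above 456 for the query hammer than the other way around:

$$
\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{w_i}{\log_2(i+1)}
$$

where w_i is the ground-truth weight of the document returned at position i, or 0 if that document is not in the ground truth set for the query.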