Ranking Metrics Jobs
Use this job to calculate relevance metrics, such as nDCG, by replaying ground truth queries (see the Ground Truth job) against catalog data using variants from an experiment.
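For reference, a standard formulation of nDCG at cutoff K is shown below; this is the textbook definition, and the job's internal variant may differ in details such as the gain or discount function.

    \mathrm{DCG@K} = \sum_{i=1}^{K} \frac{2^{rel_i} - 1}{\log_2(i + 1)}, \qquad
    \mathrm{nDCG@K} = \frac{\mathrm{DCG@K}}{\mathrm{IDCG@K}}

Here rel_i is the graded relevance (weight) of the document at rank i, and IDCG@K is the DCG@K of the ideal ordering of the judged documents.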
The ID for this Spark job, used in the API to reference the job. Must start with a letter; the remaining characters may be letters, digits, dashes (-), or underscores (_). Maximum length: 63 characters.
Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
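As a quick sanity check, the length and pattern constraints above can be verified before submitting a job; this helper is illustrative and not part of any API.

    import re

    # Pattern and length limit copied from the constraints documented above.
    JOB_ID_PATTERN = re.compile(r"[a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?")

    def is_valid_job_id(job_id: str) -> bool:
        """Return True if job_id satisfies the documented ID constraints."""
        return len(job_id) <= 63 and JOB_ID_PATTERN.fullmatch(job_id) is not None

    assert is_valid_job_id("ranking-metrics-v1")
    assert not is_valid_job_id("1-bad-start")  # IDs must begin with a letter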
Spark configuration settings.
Each setting is an object with the following attributes:
key (required): string. Display name: Parameter Name.
value: string. Display name: Parameter Value.
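As an illustration, a list of such key/value settings might look like the following; the Spark property names are standard Spark options, but the exact JSON shape expected by your deployment should be confirmed against its API reference.

    # Illustrative sparkConfig entries matching the key/value schema above.
    spark_config = [
        {"key": "spark.executor.memory", "value": "4g"},
        {"key": "spark.sql.shuffle.partitions", "value": "200"},
    ]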
Configure properties for the ground truth dataset
Input collection representing the ground truth dataset
>= 1 character
Solr filter queries to apply against the ground truth collection
Default: "type:ground_truth"
Query field in the collection
Default: query
Field containing ranked doc IDs
Default: docId
Field representing the weight of a document relative to the query
Default: weight
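Putting the defaults above together, a single ground truth document might look like this; the field names come from this page, while the concrete values are made up for illustration.

    # A ground truth document shaped after the documented defaults.
    ground_truth_doc = {
        "type": "ground_truth",          # matches the default filter query
        "query": "wireless headphones",  # default query field: query
        "docId": "SKU-12345",            # default ranked doc ID field: docId
        "weight": 2.0,                   # default weight field: weight
    }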
Configure properties for the experiment
Collection to run the experiment on
>= 1 character
Pipeline variants for the experiment
Doc ID field from which to retrieve values (must return values that match the ground truth data)
Default: id
Calculate ranking metrics using variants from the experiment
>= 1 character
Experiment objective name
>= 1 character
Default query profile to use if not specified in experiment variants
Output collection to save the ranking metrics to
>= 1 character
Ranking position K at which metrics are calculated
Default: 10
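To make the position-K setting concrete, here is a minimal sketch of how nDCG@K could be computed for one query from ground truth weights; it follows the standard formula given earlier and is not the job's internal implementation.

    import math

    def ndcg_at_k(ranked_doc_ids, weights_by_doc_id, k=10):
        """Compute nDCG@k for one query given a variant's ranking."""
        def dcg(gains):
            return sum((2**g - 1) / math.log2(i + 2) for i, g in enumerate(gains))

        gains = [weights_by_doc_id.get(d, 0.0) for d in ranked_doc_ids[:k]]
        ideal = sorted(weights_by_doc_id.values(), reverse=True)[:k]
        ideal_dcg = dcg(ideal)
        return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Ground truth weights for one query, scored at the default K = 10.
    truth = {"SKU-12345": 2.0, "SKU-67890": 1.0}
    print(ndcg_at_k(["SKU-12345", "SKU-00000", "SKU-67890"], truth))  # ~0.96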
Calculate ranking metrics for each query in the ground truth set and save them to a Solr collection
Default: true
Spark job type.
Default: ranking_metrics
Allowed values: ranking_metrics
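Finally, a sketch of what a complete job configuration might look like when submitted over HTTP. Every JSON key name below except type, and the endpoint URL, is an illustrative assumption rather than a confirmed API contract; consult your deployment's API reference for the exact field names.

    import json
    import urllib.request

    # Hypothetical configuration assembled from the properties on this page.
    job_config = {
        "type": "ranking_metrics",                # documented allowed value
        "id": "ranking-metrics-demo",
        "groundTruthCollection": "ground_truth",  # assumed key name
        "filterQueries": ["type:ground_truth"],   # documented default filter
        "queryField": "query",
        "docIdField": "docId",
        "weightField": "weight",
        "inputCollection": "products",            # assumed key name
        "outputCollection": "ranking_metrics",    # assumed key name
        "rankingPositionK": 10,                   # documented default
        "metricsPerQuery": True,                  # documented default: true
    }

    req = urllib.request.Request(
        "http://localhost:8764/api/spark/configurations",  # assumed endpoint
        data=json.dumps(job_config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment to submit against a live server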