Fusion 5.11

    Head/Tail Analysis Jobs

    Perform head/tail analysis of queries from collections of raw or aggregated signals to identify underperforming queries and the likely reasons they underperform. This information is valuable for improving Solr configurations, auto-suggest, product catalogs, and SEO/SEM strategies, with the goal of improving overall conversion rates.

    Default job name

    COLLECTION_NAME_head_tail

    Input

    Raw or aggregated signals (the COLLECTION_NAME_signals or COLLECTION_NAME_signals_aggr collections by default)

    Output

    • Rewrites for underperforming queries (the _query_rewrite_staging collection by default)

    • Analytics tables (the COLLECTION_NAME_job_reports collection by default)

    Required signals fields:

    query              required
    count_i            required
    type               required
    timestamp_tdt
    user_id
    doc_id
    session_id
    fusion_query_id
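
    For reference, a raw click signal containing these fields might look like the following; all field values here are illustrative, not output from a real system:

    {
      "query": "samsung tv",
      "count_i": 1,
      "type": "click",
      "timestamp_tdt": "2024-01-15T08:30:00Z",
      "user_id": "u123",
      "doc_id": "prod-456",
      "session_id": "s789",
      "fusion_query_id": "q-abc"
    }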

    A minimum of 10,000 signals is required to successfully run this job.

    You can review the output from this job by navigating to Relevance > Rules > Rewrite > Head/Tail. See Underperforming query rewriting for more information.

    Head/tail analysis configuration

    The job configuration must specify the following:

    • The signals collection (the Input Collection parameter)

      Signals can be raw (the COLLECTION_NAME_signals collection) or aggregated (the COLLECTION_NAME_signals_aggr collection).

    • The query string field (the Query Field Name parameter)

    • The event count field

      For example, if signal data follows the default Fusion setup, then count_i is the field that records the count of raw signals and aggr_count_i is the field that records the count after aggregation.

    The job allows you to analyze query performance based on two different events:

    • The main event (the mainType/Main Event Type parameter)

    • The filtering/secondary event (the filterType/Filtering Event Type parameter)

      If you only have one event type, leave this parameter empty.

    For example, if you specify the main event to be clicks with a minimum count of 0 and the filtering event to be queries with a minimum count of 20, then the job filters on queries that were searched at least 20 times, and checks which of those popular queries received no clicks at all, or only a few.
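
    In job-configuration terms, that example corresponds to the parameter settings below (a sketch; query/search events are assumed to be recorded as response signals, matching the filterType default described later on this page):

    {
      "mainType": "click",
      "minCountMain": 0,
      "filterType": "response",
      "minCountFilter": 20
    }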

    An example configuration is shown below:

    Head/Tail job config

    The suggested schedule for this head/tail analysis job is to run every two weeks or monthly. You can change the schedule in the Run panel.
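
    For reference, a minimal JSON job configuration along the lines of the screenshot above might look as follows. The mystore collection names are illustrative, and all other parameters keep the defaults listed in the configuration schema at the end of this page:

    {
      "type": "headTailAnalysis",
      "id": "mystore_head_tail",
      "trainingCollection": "mystore_signals",
      "fieldToVectorize": "query",
      "dataFormat": "solr",
      "countField": "count_i",
      "mainType": "click",
      "filterType": "response",
      "signalTypeField": "type",
      "outputCollection": "mystore_job_reports"
    }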

    Job output

    By default, the output collection is the <input-collection>_job_reports collection. The head/tail job adds a set of analytics results tables to the collection. You can find these table names in the doc_type_s field of each document:

    • overall_distribution

    • summary_stat

    • queries_ordered

    • tokens_ordered

    • queryLength

    • tail_reasons

    • tail_rewriting
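
    Because each document carries its table name in doc_type_s, you can retrieve a single table by filtering on that field. For example, a Solr JSON request body that selects only the tail reasons table might look like this (illustrative):

    {
      "query": "doc_type_s:tail_reasons"
    }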

    You can use App Insights to visualize each of these tables:

    1. In the Fusion workspace, navigate to Analytics > App Insights.

      The App Insights dashboard appears.

    2. On the left, click the Analytics button.

    3. Under Standard Reports, click Head Tail analysis.

      The Head/Tail Analysis job output tables appear. These are described in more detail below.

    Head/Tail Plot (overall_distribution)

    This head/tail distribution plot provides an overview of the query traffic distribution. To make the visualization readable, the unique queries are sorted in descending order of traffic and grouped into bins of 100 queries on the x axis, with the sum of the traffic from each bin on the y axis.

    For example, the head/tail distribution plot below shows a long tail, indicating that the majority of queries produce very little traffic. The goal of analyzing this data is to shorten that tail, so that a higher proportion of your queries produce traffic.

    Head/Tail Plot

    • Green = head

    • Yellow = torso

    • Red = tail

    Summary Stats (summary_stat)

    This user-configurable summary statistics table shows how much traffic is produced by various query groups, to help understand the head/tail distribution.

    Summary Stats table

    You can configure this table before running the job. Click Advanced in the Head/Tail Analysis job configuration panel, then tune these parameters:

    • Top X% Head Query Event Count (topQ)

    • Number of Queries that Constitute X% of Total Events (trafficPerc)

    • Bottom X% Tail Query Event Count (lastTraffic)

    • Event Count Computation Threshold (trafficCount)

    Query Details (queries_ordered)

    The Query Details table helps you discover which queries are the best performers and which are the worst. You can filter results by issuing a search in the search bar. For example, search "segment_s:tail" to get tail queries, or "num_events_l:0" to get zero-results queries. (Note: field names are listed in the "What is this?" tooltip.)

    Query Details table

    Top Tokens (tokens_ordered)

    The "Top Tokens" table lists the number of times each token shown in the queries.

    Top Tokens table

    Query Length (queryLength)

    This table shows how users are querying your database. Are most people searching with very long strings or very short ones? These distributions give you insight into how to tune your search engine to perform well on the majority of queries.

    Query Length table

    Tail Reasons table and pie chart (tail_reasons)

    Based on the difference between the tail and head queries, the Head/Tail Analysis job assigns probable reasons for why any given query is a tail query. Tail reasons are displayed as both a table and a pie chart:

    Tail Reasons table

    Tail Reasons pie chart

    Pre-defined tail reasons

    Based on Lucidworks' observations across a variety of signal datasets, tail reasons are summarized into several pre-defined categories:

    spelling

    The query contains one or more misspellings; we can apply spelling suggestions based on the matching head.

    number

    The query contains an attribute search on a specific dimension. To normalize these queries, we can parse the number to deal with different formatting, pay attention to unit synonyms, or enrich the product catalog. For example, "3x5" should be converted to "3’ X 5’" to match the dimension field.

    other-specific

    The query contains specific descriptive words plus a head query, which means the user is searching for a very specific product or has a specific requirement. We can boost on the specific part for better relevancy.

    other-extra

    This is similar to ‘other-specific’ but the descriptive part may lead to ambiguity, so it requires boosting the head query portion of the query instead of the specific or descriptive words.

    rare-term

    The user is searching for a rare item; use caution when boosting.

    re-wording

    The query contains a sequence of terms in a less-common order. Flipping the word order to a more common one can change a tail query to a head query, and allows for consistent boosting on the last term in many cases.

    stopwords

    The query contains stopwords plus a head query; dropping the stopwords would match it to the head.

    Custom dictionary

    You can also specify your own attributes through a keywords file in CSV format. The CSV file must have two header columns, named "keyword" and "type", and stopwords must be given the type "stopword" for the program to recognize them.

    Below is an example dictionary that defines "color" and "brand" reason types. The job will parse the tail query, assign reasons such as "color" or "brand", and perform filtering or focused search on these fields. (Note: color and brand are also the field names in your catalog.)

    keyword,type
    a,stopword
    an,stopword
    and,stopword
    blue,color
    white,color
    black,color
    hp,brand
    samsung,brand
    sony,brand
    How to install a custom dictionary

    1. Construct the CSV file as described above.

    2. Upload the CSV file to the blob store.

      Note the blob ID.

    3. In the Head/Tail Analysis job configuration, enter the blob ID in the Keywords blob name (keywordsBlobName) field.
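
    For example, if the dictionary above were uploaded with the blob ID head_tail_keywords.csv (a hypothetical name), the relevant fragment of the job configuration would be:

    {
      "keywordsBlobName": "head_tail_keywords.csv"
    }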

    Head Tail Similarity (tail_rewriting)

    For each tail query (the tailQuery_orig field), Fusion tries to find its closest matching head queries (the headQuery_orig field), then suggests a query rewrite (the suggested_query field) that would improve the query. The rewrite suggestions in this table can be implemented in a variety of ways, including using the Rules editor or configuring a query parser that rewrites tail queries.

    Head Tail Similarity table
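
    For illustration, a document in this table might look like the following. The field names come from this job's output; the values are hypothetical:

    {
      "doc_type_s": "tail_rewriting",
      "tailQuery_orig": "samsng tv stand",
      "headQuery_orig": "samsung tv stand",
      "suggested_query": "samsung tv stand"
    }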

    Use this job when you want to compare the head and tail of your queries to find common misspellings and rewritings. See the App Insights analytics pane for a review of the job's results.

    id - string (required)

    The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.

    <= 63 characters

    Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?

    sparkConfig - array[object]

    Spark configuration settings.

    object attributes:
      key (required): { display name: Parameter Name, type: string }
      value: { display name: Parameter Value, type: string }

    trainingCollection - string (required)

    Signals collection containing queries and event counts. A raw signals or an aggregation collection can be used. If an aggregation collection is used, update the filter query in the advanced options.

    >= 1 characters

    fieldToVectorize - string (required)

    Field containing the queries

    >= 1 characters

    Default: query

    dataFormat - string (required)

    Spark-compatible format that contains the training data (such as 'solr', 'parquet', or 'orc')

    >= 1 characters

    Default: solr

    trainingDataFrameConfigOptions - object

    Additional spark dataframe loading configuration options

    trainingDataFilterQuery - string

    Solr query to use when loading training data if using Solr (e.g. type:click OR type:response); a Spark SQL expression for all other data sources

    Default: *:*
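
    For example, to restrict the input to click and response signals (mirroring the example above), the relevant job-config fragment would be:

    {
      "trainingDataFilterQuery": "type:click OR type:response"
    }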

    sparkSQL - string

    Use this field to create a Spark SQL query for filtering your input data. The input data will be registered as spark_input

    Default: SELECT * from spark_input
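
    As a sketch, the following custom Spark SQL would keep only click signals from the registered spark_input table; the field names match the required signals fields listed earlier:

    {
      "sparkSQL": "SELECT query, count_i FROM spark_input WHERE type = 'click'"
    }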

    trainingDataSamplingFraction - number

    Fraction of the training data to use

    <= 1

    exclusiveMaximum: false

    Default: 1

    randomSeed - integer

    For any deterministic pseudorandom number generation

    Default: 1234

    outputCollection - string

    Solr collection to store head tail analytics results. Defaults to job reports collection

    dataOutputFormat - string

    Spark-compatible output format (such as 'solr', 'parquet', etc.)

    >= 1 characters

    Default: solr

    partitionCols - string

    If writing to non-Solr sources, this field will accept a comma-delimited list of column names for partitioning the dataframe before writing to the external output

    writeOptions - array[object]

    Options used when writing output to Solr or other sources

    object attributes:
      key (required): { display name: Parameter Name, type: string }
      value: { display name: Parameter Value, type: string }

    readOptions - array[object]

    Options used when reading input from Solr or other sources.

    object attributes:
      key (required): { display name: Parameter Name, type: string }
      value: { display name: Parameter Value, type: string }

    tailRewriteCollection - string

    Collection where tail rewrites are stored.

    >= 1 characters

    analyzerConfigQuery - string

    LuceneTextAnalyzer schema for tokenization (JSON-encoded)

    >= 1 characters

    Default: { "analyzers": [ { "name": "StdTokLowerStem","charFilters": [ { "type": "htmlstrip" } ],"tokenizer": { "type": "standard" },"filters": [{ "type": "lowercase" },{ "type": "englishminimalstem" }] }],"fields": [{ "regex": ".+", "analyzer": "StdTokLowerStem" } ]}

    countField - string (required)

    Field containing the number of times an event (like a click) occurs for a particular query; count_i in the raw signal collection or aggr_count_i in the aggregated signal collection.

    >= 1 characters

    Default: count_i

    mainType - string (required)

    The main signal event type (e.g. click) that head/tail analysis is based on. For example, if the main type is click, then head and tail queries are defined by the number of clicks.

    >= 1 characters

    Default: click

    filterType - string

    The secondary event type (e.g. response) that can be used to filter out rare searches. Note: to use the default value of response, make sure the input collection contains type:response documents. If there is no need to filter on the number of searches, leave this parameter blank.

    Default: response

    signalTypeField - string (required)

    The field name of signal type in the input collection.

    Default: type

    minCountMain - integer

    Minimum number of main events (e.g. clicks after aggregation) necessary for a query to be considered. The job only analyzes queries with a main-event count greater than or equal to this number.

    Default: 1

    minCountFilter - integer

    Minimum number of filtering events (e.g. searches after aggregation) necessary for a query to be considered. The job only analyzes queries that were issued at least this many times.

    Default: 20

    queryLenThreshold - integer

    Minimum length of a query to be included for analysis. The job will only analyze queries with length greater than or equal to this value.

    Default: 2

    userHead - number

    User-defined threshold for the head definition. A value of -1.0 lets the program pick the number automatically. A value less than 1.0 denotes a percentage (e.g. 0.1 puts the top 10% of queries into the head); a value of exactly 1.0 denotes 100% (all queries go into the head); a value greater than 1.0 denotes an exact number of queries (e.g. 100 means the top 100 queries constitute the head).

    Default: -1

    userTail - number

    User-defined threshold for the tail definition. A value of -1.0 lets the program pick the number automatically. A value less than 1.0 denotes a percentage (e.g. 0.1 puts the bottom 10% of queries into the tail); a value of exactly 1.0 denotes 100% (all queries go into the tail); a value greater than 1.0 denotes an exact number of queries (e.g. 100 means the bottom 100 queries constitute the tail).

    Default: -1
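
    For example, assuming you wanted the top 10% of queries treated as the head and exactly the bottom 100 queries treated as the tail, the corresponding fragment of the job configuration would be:

    {
      "userHead": 0.1,
      "userTail": 100
    }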

    topQ - array[number]

    Compute how many total events come from the top X head queries (either a number greater than or equal to 1.0, or a percentage of the total number of unique queries)

    Default: 100, 0.01

    trafficPerc - array[number]

    Compute how many queries constitute each of the specified event portions (e.g. 0.25, 0.50)

    Default: 0.25, 0.5, 0.75

    lastTraffic - array[number]

    Compute the total number of queries that are spread over each of the specified tail event portions (E.g., 0.01)

    Default: 0.01

    trafficCount - array[number]

    Compute how many queries have fewer events than each value specified (e.g. a value of 5.0 returns the number of queries with fewer than 5 associated events)

    Default: 5

    keywordsBlobName - string

    Name of the keywords blob resource. Typically, this is a CSV file uploaded to the blob store in a specific format. See the custom dictionary section above for details on the format and on uploading to the blob store.

    >= 1 characters

    lenScale - integer

    A scaling factor used to normalize the length of the query string. A head string and tail string are considered a spelling match when edit_dist <= string_length/lenScale. A larger value leads to a shorter spelling list; a smaller value leads to a longer spelling list but may add lower-quality corrections. For example, with the default value of 6, a 12-character query can match head queries up to an edit distance of 2.

    Default: 6

    overlapThreshold - integer

    The threshold for the number of overlapping tokens between the head and tail. When a head string and tail string share more tokens than this threshold, they are considered a good match.

    Default: 4

    tailRewrite - boolean

    If true, also generate the tail rewrite table; otherwise, only compute the distributions. You may need to set this to false on the very first run, to help customize the head and tail positions.

    Default: true

    sparkPartitions - integer

    Spark will re-partition the input to have this number of partitions. Increase for greater parallelism.

    Default: 200

    enableAutoPublish - boolean

    If true, automatically publishes rewrites for rules. The default is false, to allow for initial human review.

    Default: false

    type - string (required)

    Default: headTailAnalysis

    Allowed values: headTailAnalysis