
    Query-to-Query Session-Based Similarity Jobs

    This recommender is based on co-occurrence of queries in the context of clicked documents and sessions. It is useful when your data shows that users tend to search for similar items in a single search session. This method of generating query-to-query recommendations is faster and more reliable than the Query-to-Query Similarity recommender job, and is session-based unlike the similar queries previously generated as part of the Synonym Detection job.

    Default job name

    COLLECTION_NAME_query_recs

    Input

    Raw signals (the COLLECTION_NAME_signals collection by default).

    Output

    Queries-for-query recommendations (the COLLECTION_NAME_queries_query_recs collection by default)

    Required signals fields:

        query              required
        count_i            required
        type               [2]
        timestamp_tdt      [3]
        user_id            [4]
        doc_id             required
        session_id         required
        fusion_query_id

    [2] Required if you want to weight types differently.

    [3] Required if you want to use time decay.

    [4] Required if no session_id field is available. Either user_id or session_id is needed on response signals if doing any click path analysis from signals.

    The job generates two sets of recommendations based on the two approaches described below, then merges and de-duplicates them to produce unique query-recommendation pairs.

    Similar queries based on documents clicked

    Queries are considered for recommendation if two queries have similar sets of clicked document IDs according to the signals data. This is directly implemented from the similar queries portion of the Synonym Detection job.

    This approach can work on both raw and aggregated signals.

    Similar queries based on co-occurrence in sessions

    Queries are considered for recommendation if two queries have co-occurred in the same session, based on the assumption that users search for similar items in a single search session (this may or may not hold true depending on the data).

    This approach, based on session/user IDs, requires raw signals to work.
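
    To make the two approaches concrete, here is a minimal, illustrative Python sketch of how candidate query pairs could be derived from raw click signals. It is not the job's actual Spark implementation; the sample signals, the Jaccard-style overlap measure, and the threshold comparison are assumptions for illustration only. The field names follow the job defaults (query_s, doc_id_s, session_id_s).

        from collections import defaultdict
        from itertools import combinations

        # Hypothetical raw click signals; field names follow the job defaults.
        signals = [
            {"query_s": "red polo", "doc_id_s": "d1", "session_id_s": "s1"},
            {"query_s": "polo shirt", "doc_id_s": "d1", "session_id_s": "s1"},
            {"query_s": "red polo", "doc_id_s": "d2", "session_id_s": "s2"},
            {"query_s": "golf shirt", "doc_id_s": "d2", "session_id_s": "s2"},
        ]

        # Approach 1: similar queries based on documents clicked.
        # Two queries are candidates when their sets of clicked documents overlap enough.
        clicked_docs = defaultdict(set)
        for s in signals:
            clicked_docs[s["query_s"]].add(s["doc_id_s"])

        def doc_overlap(q1, q2):
            a, b = clicked_docs[q1], clicked_docs[q2]
            return len(a & b) / len(a | b)  # Jaccard-style overlap; the job's exact measure may differ

        click_based = {
            (q1, q2): doc_overlap(q1, q2)
            for q1, q2 in combinations(clicked_docs, 2)
            if doc_overlap(q1, q2) > 0.3  # cf. the overlapThreshold parameter (default 0.3)
        }

        # Approach 2: similar queries based on co-occurrence in sessions.
        # Two queries are candidates when they occur in the same session.
        session_queries = defaultdict(set)
        for s in signals:
            session_queries[s["session_id_s"]].add(s["query_s"])

        session_based = defaultdict(int)
        for queries in session_queries.values():
            for q1, q2 in combinations(sorted(queries), 2):
                session_based[(q1, q2)] += 1  # pair counts, cf. minPairOccCount

        # The job then merges and de-duplicates the candidates from both approaches.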

    Query-to-Query Session-Based Similarity job dataflow

    A default Query-to-Query Session-Based Similarity job (COLLECTION_NAME_query_recs) and a dedicated collection and pipeline are created when you enable recommendations for a collection.

    At a minimum, you must configure these:

    • an ID for this job

    • the input collection containing the signals data, usually COLLECTION_NAME_signals

    • the data format, usually solr

    • the query field name, usually query_s

    • the document ID field name, usually doc_id_s

    • optionally, the user session ID or user ID field name

      If this field is not specified, then the job generates click-based recommendations only, without session-based recommendations.
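
    For reference, a minimal configuration covering just these fields might look like the following sketch, expressed as a Python dict whose keys match the configuration properties documented later on this page. The collection and field values shown (products_signals and so on) are illustrative, not required names.

        # Minimal Query-to-Query Session-Based Similarity job configuration (illustrative values).
        job_config = {
            "type": "similar_queries",
            "id": "products_query_recs",                # an ID for this job
            "trainingCollection": "products_signals",   # input collection containing the signals data
            "dataFormat": "solr",                       # data format
            "fieldToVectorize": "query_s",              # query field name
            "docIdField": "doc_id_s",                   # document ID field name
            "sessionIdField": "session_id_s",           # optional; omit to disable session-based recommendations
            "outputCollection": "products_queries_query_recs",
        }

    The same settings can also be entered through the job's configuration panel in the Fusion UI.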

    Data tips

    • Running the job on data other than signals is not recommended and may yield unexpected results.

    • To get about 90% query coverage with the query pipeline, we recommend a raw signals dataset of about 170k unique queries. More signals will generally improve coverage.

    • On a raw signal dataset of about 3 million records, the job finishes execution in about 7-8 minutes on two executor pods with one CPU and about 3G of memory each. Your performance may vary depending on your configuration.

    Boosting recommendations

    Generally, if a query and a recommendation have some token overlap, they are closely related and we want to highlight these pairs. Therefore, query-recommendation pair similarity scores can be boosted based on token overlap. This overlap is calculated in terms of the number or fraction of tokens that overlap.

    For example, consider the pair (“a red polo shirt”, “red polo”). If the minimum match parameter is set to 1, then at least 1 token must be in common. In this example there are 2 tokens in common (“red” and “polo”), so the pair is boosted. If the parameter is set to 0.5, then at least half of the tokens from the shorter string (in terms of space-separated tokens) must match. Here, the shorter string is “red polo”, which is 2 tokens long, so at least 1 token must match to satisfy the boosting requirement.
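
    As an illustrative sketch (not the job's actual implementation), the minimum-match check in this example can be expressed as follows; the function name is hypothetical and stopword handling is omitted here.

        def satisfies_min_match(query, recommendation, mm):
            """Check token overlap against a minimum match (mm) value.

            mm >= 1 is treated as an absolute token count; 0 < mm < 1 is treated
            as a fraction of the shorter string's token count.
            """
            q_tokens, r_tokens = set(query.split()), set(recommendation.split())
            overlap = len(q_tokens & r_tokens)
            if mm >= 1:
                needed = int(mm)
            else:
                needed = max(1, int(mm * min(len(q_tokens), len(r_tokens))))
            return overlap >= needed

        # ("a red polo shirt", "red polo") share the tokens "red" and "polo".
        print(satisfies_min_match("a red polo shirt", "red polo", 1))    # True: 2 tokens overlap, 1 needed
        print(satisfies_min_match("a red polo shirt", "red polo", 0.5))  # True: shorter string has 2 tokens, so 1 needed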

    Tuning tips

    These tuning tips pertain to the advanced Model Tuning Parameters:

    • Special characters to be filtered out. Special characters can cause problems with matching queries and are therefore removed in the job.

      Only the characters are removed, not the queries, so a query like ps3$ becomes ps3.
    • Query similarity threshold. This is for use by the similar queries portion of the job and is the same as that used in the Synonym and Similar Queries Detection job.

    • Boost on token overlap. This enables or disables boosting of query-recommendation pairs where all or some tokens match. How much overlap is required for boosting can be configured using the next parameter.

      For example, if this is enabled, then a query-recommendation pair like (playstation 3, playstation console) is boosted with a similarity score of 1, provided the minimum match is set to 1 token or 0.5.

    • Minimum match for token overlap. Similar to the mm param in Solr, this defines the number/fraction of tokens that should overlap if boosting is enabled. Queries and recommendations are split by “ “ (space) and each part is considered a token. If using a less-than sign (<), it must be escaped using a backslash.

      The value can be an integer, such as 1, in which case that many tokens must match. So in the previous example, the pair is boosted because the term “playstation” is common to both and the mm value is set to 1.

      The value can also be a fraction, in which case that fraction of the shorter member of the query-recommendation pair must match. For example, if the value is set to 0.5, the query is 4 tokens long, and the recommendation is 6 tokens long, then there must be at least 2 common tokens between the query and the recommendation.

      Stopwords specified in the stopwords list are ignored when calculating the overlap.

    • Query clicks threshold. The minimum number of clicked documents needed for comparing queries.

    • Minimum query-recommendation pair occurrence count. The minimum number of times a query-recommendation pair must be generated to make it into the final similar query recommendation list. The default is 2. A higher value improves quality but reduces coverage.

    The similar queries collection

    The following fields are stored in the COLLECTION_NAME_queries_query_recs collection:

    • query_t

    • recommendation_t

    • similarity_d, the similarity score

    • source_s, the approach that generated this pair, one of the following: SessionBased or ClickedDocumentBased

    • query_count_l, the number of times the query occurred in signals

    • recommendation_count_l, the number of times the recommendation occurred in signals

    • pair_count_l, the number of instances of the pair generated in the final recommendations using either of the approaches

    • type_s, always set to similar_queries
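
    For inspection or serving-time lookups, you can query this collection for the stored recommendations for a given query. The sketch below uses Solr query parameters over the fields listed above; the host, credentials, collection name, and the /api/solr passthrough path are assumptions that depend on your deployment.

        import requests

        FUSION = "https://fusion.example.com"          # hypothetical Fusion host
        COLLECTION = "products_queries_query_recs"     # COLLECTION_NAME_queries_query_recs

        params = {
            "q": 'query_t:"red polo"',
            "fq": "type_s:similar_queries",
            "sort": "similarity_d desc",
            "fl": "recommendation_t,similarity_d,source_s,pair_count_l",
            "rows": 10,
        }
        resp = requests.get(f"{FUSION}/api/solr/{COLLECTION}/select",
                            params=params, auth=("admin", "change-me"))
        for doc in resp.json()["response"]["docs"]:
            print(doc["recommendation_t"], doc["similarity_d"], doc["source_s"])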

    The query pipeline

    When you enable recommendations, a default query pipeline, COLLECTION_NAME_queries_query_recs, is created.

    Use this job to batch-compute query-to-query similarities using a co-occurrence-based approach.

    id - string (required)

    The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.

    <= 63 characters

    Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?

    sparkConfig - array[object]

    Spark configuration settings.

    object attributes:
        key - string (required), display name: Parameter Name
        value - string, display name: Parameter Value

    trainingCollection - string (required)

    Collection containing queries, document IDs, and event counts. Can be either a signals aggregation collection or a raw signals collection.

    fieldToVectorize - string (required)

    Field containing queries.

    >= 1 characters

    Default: query_s

    dataFormat - string (required)

    Spark-compatible format that contains the training data (such as 'solr', 'parquet', or 'orc').

    >= 1 characters

    Default: solr

    trainingDataFrameConfigOptions - object

    Additional spark dataframe loading configuration options

    trainingDataFilterQuery - string

    Solr query to additionally filter the input collection.

    Default: *:*

    sparkSQL - string

    Use this field to create a Spark SQL query for filtering your input data. The input data will be registered as spark_input

    Default: SELECT * from spark_input

    trainingDataSamplingFraction - number

    Fraction of the training data to use

    <= 1

    exclusiveMaximum: false

    Default: 1

    randomSeed - integer

    For any deterministic pseudorandom number generation

    Default: 1234

    outputCollection - string

    Collection to store synonym and similar query pairs.

    dataOutputFormat - string

    Spark-compatible output format (like 'solr', 'parquet', etc)

    >= 1 characters

    Default: solr

    partitionCols - string

    If writing to non-Solr sources, this field will accept a comma-delimited list of column names for partitioning the dataframe before writing to the external output

    writeOptions - array[object]

    Options used when writing output to Solr or other sources

    object attributes:
        key - string (required), display name: Parameter Name
        value - string, display name: Parameter Value

    readOptions - array[object]

    Options used when reading input from Solr or other sources.

    object attributes:
        key - string (required), display name: Parameter Name
        value - string, display name: Parameter Value

    specialCharsFilterString - string

    String of special characters to be filtered from queries.

    Default: ~!@#$^%&*\(\)_+={}\[\]|;:"'<,>.?`/\\-

    minQueryLength - integer

    Queries below this length (in number of characters) will not be considered for generating recommendations.

    >= 1

    exclusiveMinimum: false

    Default: 3

    maxQueryLength - integer

    Queries above this length will not be considered for generating recommendations.

    >= 1

    exclusiveMinimum: false

    Default: 50

    countField - string

    Solr field containing number of events (e.g., number of clicks).

    Default: count_i

    docIdField - string (required)

    Solr field containing the document ID that the user clicked.

    Default: doc_id_s

    overlapThreshold - number

    The threshold above which query pairs are considered similar. Decreasing the value can fetch more pairs at the expense of quality.

    <= 1

    exclusiveMaximum: false

    Default: 0.3

    minQueryCount - integer

    The minimum number of clicked documents needed for comparing queries.

    >= 1

    exclusiveMinimum: false

    Default: 1

    overlapEnabled - boolean

    Maximize score for query pairs with overlapping tokens by setting score to 1.

    Default: true

    tokenOverlapValue - number

    Minimum amount of overlap to consider for boosting. To specify overlap as a ratio, use a value in (0, 1). To specify overlap as an exact count, use a value >= 1. If the value is 0, the boost is applied if one query is a substring of its pair. Stopwords are ignored while counting overlaps.

    Default: 1

    sessionIdField - string

    If a session ID is not available, specify a user ID field instead. If this field is left blank, session-based recommendations will be disabled.

    Default: session_id_s

    minPairOccCount - integer

    Minimum number of times a query pair must be generated to be considered valid.

    >= 1

    exclusiveMinimum: false

    Default: 2

    stopwordsBlobName - string

    Name of the stopwords blob resource. This is a .txt file with one stopword per line. By default the file is called stopwords/stopwords_nltk_en.txt, but a custom file can also be used. Check the documentation for more details on the format and on uploading to the blob store.

    Default: stopwords/stopwords_en.txt

    type - string (required)

    Default: similar_queries

    Allowed values: similar_queries
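
    Putting the reference above together, a fuller configuration that also sets the model tuning parameters might look like the sketch below. The tuning values shown are the documented defaults; the collection names are illustrative.

        # Query-to-Query Session-Based Similarity job configuration with tuning parameters (illustrative).
        job_config = {
            "type": "similar_queries",
            "id": "products_query_recs",
            "trainingCollection": "products_signals",
            "dataFormat": "solr",
            "trainingDataFilterQuery": "*:*",
            "fieldToVectorize": "query_s",
            "docIdField": "doc_id_s",
            "countField": "count_i",
            "sessionIdField": "session_id_s",
            "outputCollection": "products_queries_query_recs",
            # Model tuning parameters (documented defaults).
            "minQueryLength": 3,
            "maxQueryLength": 50,
            "overlapThreshold": 0.3,
            "minQueryCount": 1,
            "overlapEnabled": True,
            "tokenOverlapValue": 1,
            "minPairOccCount": 2,
            "stopwordsBlobName": "stopwords/stopwords_en.txt",
        }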