Release date: 28 February 2019

Component versions:
  • Solr 7.5
  • ZooKeeper 3.4.13
  • Spark 2.3.1
  • Jetty 9.4.11.v20180605
  • Ignite 2.3.0
More information about support dates can be found at Lucidworks Fusion Product Lifecycle.

New features

  • Smarter relevancy with query rewriting. Query rewriting turns ineffective queries into more relevant results by automatically correcting misspelled terms, expanding queries to include synonyms, boosting known phrases, and applying your own business rules. Fusion 4.2.0 ties together existing query rewriting features and adds new features to make query rewriting easier to configure.
  • Refine the final search results with response rewriting. Similar to query rewriting, response rewriting can apply machine learning, business rules, or other criteria to Solr’s response, refining the final set of search results before Fusion sends them to the search application. Response rewriting can be performed using rules and a set of new query pipeline stages that fall into two categories.
  • More Natural Language Processing (NLP) power in pipelines. This release includes a new NLP Annotator index stage and query stage that leverage the popular Spark NLP library from John Snow Labs, introducing new NLP features in addition to the existing Named Entity Extraction (NER) functionality.
See the Fusion Server 4.2.0 release notes and the App Studio 4.2.0 release notes for other changes.

Improvements

  • Experiments can now be configured as multi-armed bandits by selecting the new Automatically Adjust Weights Between Variants option when you set up an experiment using a query profile (see Plan an Experiment and Set Up an Experiment Using a Query Profile).
  • A Part-of-Speech (POS) model is now available in the blob store by default, as en-pos-maxent.bin, for use by the Phrase Extraction job.
From a planning standpoint, an experiment has these parts:
  • A baseline control. One of the experiment variants will be the control. This is “how we are doing things today.” If you are experimenting from the start, choose the simplest variant as the control.
  • Experiment variants. Experiment variants other than the control are attempts to improve the user’s search experience. Which relevancy strategy works best for your search app and your users?
  • Metrics. This is how you know whether the search variants produce differences in user interactions, and whether the differences are statistically significant.
In the remainder of this topic, you will make decisions about these broad areas, as well as about experiment details.

Plan what you want to vary

Identify different relevancy strategies, where each represents a hypothesis about which user experience will drive more click-throughs, purchases, and so on. Use the Query Workbench to explore how to produce different search results and recommendations using different query pipelines, and evaluate which ones might engage your users most effectively.

Plan what you want to measure

Metrics compare the control against the other variants pairwise. For example, if the variants are A, B, C, and D, and you choose A as the control, then the comparisons for which metrics are generated will be A/B, A/C, and A/D. You can learn more about metrics.

Design the experiment

When designing an experiment, you must make these decisions:
  • How users are identified
  • Percentage of total traffic to send through the experiment
  • Number of variants and how they differ
  • Metrics to generate
In many cases, identifying users is straightforward: use an existing user ID or session ID if the application has one. In other cases, you may need to generate an identifier of some sort to send in on queries. It is important to send some kind of identifier with each query so that the experiment can route the query to a variant, and to send that same identifier with any subsequent signals that resulted from that query. Queries without a user ID are not routed through the experiment. (A minimal sketch of this pattern appears at the end of this section.)

The percentage of total traffic to send through the experiment is the one variable that can change over the course of the experiment. It is often good practice to start by sending only a small percentage of search traffic through a new experiment, in order to verify that each variant is functioning properly. Once you have established that the behavior is as intended, you can increase the percentage of traffic through the experiment to the desired level.

With modest usage and a possibly small effect, or when testing multiple variants at the same time, you might want to send 100% of users through the experiment and let it run longer. With high usage, an effect that is expected to be larger, and only two variants, you might not need to send all users through the experiment, and the experiment will not take as long.
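As a concrete illustration, the sketch below sends a query and a follow-up click signal that share the same identifier. It is a minimal example, not an official Fusion client: the host, credentials, app, query profile, collection, and endpoint paths are placeholder assumptions, so check the Fusion REST API documentation for the exact endpoints in your version.

```python
# Minimal sketch only: endpoint paths, names, and credentials are placeholders,
# not an official Fusion client. The point is that the SAME identifier travels
# with the query and with any later signals generated by that query.
import requests

FUSION = "http://localhost:8764"                  # placeholder Fusion host
APP, PROFILE, COLLECTION = "myapp", "myprofile", "mycollection"  # placeholders
AUTH = ("admin", "password123")                   # placeholder credentials
USER_ID = "user-12345"                            # persistent user ID or session ID

# 1. Send the query with the identifier so the experiment can route it to a variant.
resp = requests.get(
    f"{FUSION}/api/apps/{APP}/query/{PROFILE}",   # assumed query-profile endpoint
    params={"q": "running shoes", "userId": USER_ID},
    auth=AUTH,
)
docs = resp.json()["response"]["docs"]

# 2. Report a click on a result as a signal, reusing the same identifier so the
#    interaction can be attributed to the variant that served the query.
click_signal = [{
    "type": "click",
    "params": {"query": "running shoes", "docId": docs[0]["id"], "userId": USER_ID},
}]
requests.post(
    f"{FUSION}/api/signals/{COLLECTION}",         # assumed signals endpoint
    json=click_signal,
    auth=AUTH,
)
```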

Choose traffic weights

Fusion AI uses traffic weights to apportion search traffic among the variants. This allows you to send a different percentage of traffic through each variant if desired.

Automatic traffic weights (multi-armed bandit)

Automatic traffic weights are only available in Fusion 4.2+.
The Automatically Adjust Weights Between Variants configuration option enables multi-armed bandits and eliminates the need to specify a traffic weight for each variant.

In multi-armed bandit mode, metrics jobs are created automatically once the experiment starts. In Fusion 4.2.x, schedules are also created automatically. In Fusion 5.x, however, the jobs must be scheduled manually. It is recommended that you schedule the metrics jobs to run hourly. The weights between variants only change after the metrics jobs run.

Fusion’s multi-armed bandit implementation uses a variation of Thompson Sampling (sometimes called Bayesian Bandits). This algorithm uses the current counts of successes and failures to build a beta distribution that represents the level of confidence in the primary metric value for each variant. It then samples a random number from each variant’s distribution and routes traffic to the variant with the highest sample. (A minimal code sketch of this technique appears at the end of this section.)

This type of implementation has three effects:
  • It weights better-performing variants higher. Since the beta distribution of each variant is centered around the primary metric value for that variant, a random number selected from a higher-performing variant is likely to be higher than a random number picked from a lower-performing variant.
  • Lower-performing variants remain in play. Picking a random number from each distribution preserves the chance that Fusion will try a lower-performing variant, as long as there is still a chance that it is better.
  • The more confident the measurements, the narrower the beta distributions become. The more uncertain the measurements, the wider the distributions will be, and thus the more likely that Fusion will choose variants that appear to be performing more poorly.
Since Fusion adjusts the weights between variants each time the metrics jobs run, users can still get different results on subsequent visits. For example, if variant A is getting 80% of traffic, but after recalculating metrics it is only getting 50% of traffic, then some users who were previously assigned to variant A will be assigned to variant B. However, only the bare minimum of users will be reassigned to a new variant. Most users will see no changes. Once the experiment has been running for some time, the changes between the variants should be fairly small, so relatively few users should be affected.
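The following is a minimal, illustrative sketch of the Thompson Sampling idea described above. It is not Fusion’s internal implementation, and the success/failure counts are made up: each variant’s counts define a beta distribution, one sample is drawn per variant, and traffic goes to the variant with the highest sample.

```python
# Illustrative sketch of Thompson Sampling (Bayesian Bandits) as described above.
# NOT Fusion's internal code; the counts below are hypothetical.
import random

# Hypothetical running counts per variant: successes (e.g. clicks) and failures.
variants = {
    "control":   {"successes": 12, "failures": 88},
    "variant_b": {"successes": 15, "failures": 85},
}

def choose_variant(variants):
    """Draw one sample from each variant's Beta(successes+1, failures+1)
    distribution and route the request to the variant with the highest sample."""
    samples = {
        name: random.betavariate(c["successes"] + 1, c["failures"] + 1)
        for name, c in variants.items()
    }
    return max(samples, key=samples.get)

# Over many draws, the better-performing variant is chosen more often, yet the
# weaker variant still receives some traffic while the measurements are uncertain.
counts = {name: 0 for name in variants}
for _ in range(10_000):
    counts[choose_variant(variants)] += 1
print(counts)  # roughly a quarter of draws still go to the control here; results vary per run
```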

Manually specifying traffic weights

The formula for the proportion of traffic sent to variant A is:
Proportion(A) = Traffic weight(A) / (Sum of traffic weights for all variants)
For example:
Variant traffic weights | Sum of traffic weights | Variant proportions
1.0 1.0                 | 2                      | 0.5 0.5
1.0 1.0 2.0             | 4                      | 0.25 0.25 0.5
0.5 1.0 1.0 2.5         | 5                      | 0.1 0.2 0.2 0.5
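As a quick check of the formula, this small sketch (a hypothetical helper, not part of Fusion) computes the proportions for the weight combinations in the table above.

```python
# Each variant's share of traffic is its weight divided by the sum of all weights.
def variant_proportions(weights):
    total = sum(weights)
    return [round(w / total, 2) for w in weights]

print(variant_proportions([1.0, 1.0]))            # [0.5, 0.5]           (sum = 2)
print(variant_proportions([1.0, 1.0, 2.0]))       # [0.25, 0.25, 0.5]    (sum = 4)
print(variant_proportions([0.5, 1.0, 1.0, 2.5]))  # [0.1, 0.2, 0.2, 0.5] (sum = 5)
```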

Calculate sample sizes

Fusion will calculate the required sample size to detect a statistically significant result based on the results at runtime. The “confidence level” metric has this minimum sample size factored in, so that confidence is always low for experiments that have not yet reached their required sample size.

However, if you would like to use a different power or significance level in evaluating your experiment (Fusion will use 0.8 and 0.05), or if you would like to establish your own sample size based on a desired minimum detectable effect, you may do so.
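If you prefer to estimate a sample size yourself, the sketch below applies a standard two-proportion sample-size formula with power 0.8 and significance level 0.05. This is a generic statistical calculation, not Fusion’s internal computation, and the baseline rate and minimum detectable effect are illustrative assumptions.

```python
# Generic two-proportion sample-size estimate (power 0.8, significance 0.05).
# A standard A/B-testing formula, not Fusion's internal calculation; the example
# rates below are assumptions.
from statistics import NormalDist

def sample_size_per_variant(p_baseline, mde, alpha=0.05, power=0.8):
    """Users needed per variant to detect an absolute lift of `mde` over a
    baseline conversion rate `p_baseline` with a two-sided test."""
    p1, p2 = p_baseline, p_baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for power = 0.8
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p2 - p1) ** 2
    return int(n) + 1

# Example: 5% baseline click-through rate, smallest effect worth detecting is 1% absolute.
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,200 users per variant
```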

Choose an implementation approach

You can construct an experiment in either of two ways:
  • Experiment and query profile (recommended). For most cases, you will want to create additional query pipelines that return different search results. A query profile directs traffic through the query pipelines in accordance with the traffic weights of experiment variants.
  • Experiment stage in a query pipeline. If you want to use parts of a single query pipeline in all experiment variants, you can add an Experiment stage to that pipeline (the pipeline that receives search queries). The app can direct queries to the endpoint of a query profile that references the pipeline (recommended) or to the endpoint of the query pipeline. If used, the query profile does not reference an experiment.

Next step

You have planned the experiment. Next, you will set it up using either a query profile or an Experiment stage. This guide includes both options.
Perform the tasks needed to set up an experiment. This topic describes how to set up an experiment using a query profile that references the experiment.

Before you begin

Before you set up an experiment, you must already have:
  • A search app. The key aspect of the search app for experiments is that the search app identifies users in some way. A user ID might be associated with users persistently (this is best) or only for the current session. Searches and subsequent actions by anonymous users are not sent through an experiment.
  • A Fusion app. The Fusion app provides the search functionality for your search app. Below, you will modify the Fusion app to include an experiment.
  • Data for users to search. The app should have data that users can search and search results that users can interact with. Typically, users will search a single dataset in the sense that different users are not given search results from different datasets. But in an experiment, different experiment variants can use data in different collections.
  • Results for users to interact with. Experiment metrics depend on users interacting with search results, for example, clicking on them. A search app uses signals to report the interactions to Fusion.
  • A plan for the experiment. This plan includes which control and variants to compare, the projected traffic, sample sizes, experiment duration, metrics, and decision criteria.
Basically, you need a working system in some environment on which you want to perform experiments, and a plan for experiment variants and for deciding which results are best.

Create query pipelines

The primary approach for experiment variants is to have multiple query pipelines. You must create additional query pipelines as needed before creating the experiment.
You can also vary the collection that is searched and query parameters, which you do when creating the experiment. In fact, if that is all you are varying, you can define an experiment that uses a single query pipeline. (And you can skip this section.)
You will need the following pipelines:
  • Pipeline for the control variant. The first variant in an experiment is the control. The A/B/n experiment metrics compare the control variant pairwise with the other variants. The control variant should probably use the default query pipeline. Here, we assume that is the case.
    You could modify the configuration of the default query pipeline (which the control variant will use) before the experiment. However, doing so has two disadvantages. First, pairwise comparisons in the experiment will not be against the status quo. Second, Fusion AI will not have a baseline of historical signal data to compare against (that comparison lets an experiment reach higher confidence numbers more quickly).
  • Pipelines for other variants. Other variants in an experiment can use other query pipelines, each with different stages and/or different configurations. These are “how you might do things differently.”
How to create query pipelines for non-control variants:
Create and configure the additional query pipelines you need for non-control variants. Repeat these steps for each pipeline.
  1. Navigate to Querying > Query Pipelines.
  2. Click Add.
  3. Enter a Pipeline ID (arbitrary name) for the pipeline, and then click Save.
  4. Modify the pipeline as desired, for example, by adding, configuring, and reordering stages.
  5. Click Save.

Create the query profile and the experiment

In the Fusion UI, you can use either of these equivalent approaches to set up an experiment:

Option A: Create the experiment in the Query Profiles UI

  1. Choose or create a query profile. Choose an existing query profile (for example, the default query profile) or create a new query profile for the experiment. How to choose an existing query profile:
    1. Navigate to Querying > Query Profiles.
    2. In the left pane, click the query profile you want to use.
    3. Verify that the pipeline, search handler, collection ID, and parameters are correct.
    4. (Optional) Click New params and specify URL parameters to add to all queries for this query profile.
    How to create a new query profile:
    1. Navigate to Querying > Query Profiles.
    2. Click New.
    3. Enter a Query Profile ID (an arbitrary name for the query profile).
    4. Choose a pipeline and a collection.
    5. (Optional) Click New params and specify URL parameters to add to all queries for this query profile.
  2. Enable experimentation and specify experiment details in the query profile:
    1. Click the checkbox in front of Enable experimentation.
    2. Specify the percent of traffic to include in the experiment.
    3. Click the dropdown arrow, and then click Add Experiment.
    4. Enter an arbitrary ID (name) for the experiment.
    5. Verify that the unique ID parameter is correct. It is the parameter that uniquely identifies each user. The default is userId. Correct the parameter if necessary, for example by specifying the session ID field instead.
    6. Choose the base collection for signals. Signals resulting from requests that flow through the experiment are stored in the _signals collection associated with this collection.
    7. (Optional) Enter a description for the experiment.
    8. (Optional in Fusion 4.2+) To use a multi-armed bandit, select Automatically Adjust Weights Between Variants.
    9. Add variants. Click Add Variant to add each non-control variant in your experiment.
    10. For each variant:
    11. Enter an arbitrary name. For the first variant, which is the control, Fusion uses the name control. You can change that name if you wish.
    12. Click Specify what varies and specify what varies. Items you select are visible in the variant UI and have a green check mark in the dropdown menu. You can vary the query pipeline, query parameters (URL parameters to add to the query), and/or the collection.
    13. (For query parameters) Click New params. In the dialog box, specify the Parameter Name, Parameter Value, and Update Policy for each parameter (append, default, remove, or replace).
  3. Add metrics. For each metric:
  4. Click Add Metric and select the type of metric.
  5. Fill in information for the metric.
  6. Click Save to save the experiment.

Option B: Create the experiment in the Experiments UI

You can create the experiment first and reference it from a query profile.
  1. Create an experiment. The experiment defines variants and metrics, as well as the user ID parameter and the base collection for signals:
  2. Navigate to Analytics > Experiments.
  3. Click New.
  4. Enter an arbitrary ID (name) for the experiment.
  5. Verify that the unique ID parameter is correct. It is the parameter that uniquely identifies each user. The default is userId. Correct the parameter if necessary, for example by specifying the session ID field instead.
  6. Choose the base collection for signals. Signals resulting from requests that flow through the experiment are stored in the _signals collection associated with this collection.
  7. (Optional) Enter a description for the experiment.
  8. (Optional in Fusion 4.2+) To use a multi-armed bandit, select Automatically Adjust Weights Between Variants.
  9. Add variants. Click Add Variant to add each non-control variant in your experiment.
  10. For each variant:
  11. Enter an arbitrary name. For the first variant, which is the control, Fusion uses the name control. You can change that name if you wish.
  12. Click Specify what varies and specify what varies. Items you select are visible in the variant UI and have a green check mark in the dropdown menu. You can vary the query pipeline, query parameters (URL parameters to add to the query), and/or the collection.
  13. (For query parameters) Click New params. In the dialog box, specify the Parameter Name, Parameter Value, and Update Policy for each parameter (append, default, remove, or replace).
  14. Add metrics. For each metric:
  15. Click Add Metric and select the type of metric.
  16. Fill in information for the metric.
  17. Click Save to save the experiment.
  18. Reference the experiment from the query profile. Open the query profile you want to use for the experiment.
    1. Navigate to Querying > Query Profiles.
    2. Click the query profile you want to use for the experiment.
    3. (Optional) If necessary, modify the query profile ID, default pipeline, and/or the search handler. These modifications are not related to experiments and are probably not required.
    4. (Optional) Click New params and specify URL parameters to add to all queries for this query profile.
    5. Click the checkbox in front of Enable experimentation.
    6. Specify the percent of traffic to include in the experiment.
    7. Click the dropdown arrow, and then click the experiment name.
    8. Click Save to save the query profile.

Next step

You have set up the experiment using a query profile. Alternatively, you can set it up using an Experiment stage. If you used this article to set up your experiment, you can skip the next article.

Known issues

  • In Underperforming Query Rewriting and Misspelling Detection, new or modified query rewrites cannot be saved when any of their values include trailing or leading whitespace. Remove any trailing or leading whitespace to save the query rewrite.
  • The Query Rewriting UI’s search feature joins multiple search terms using OR instead of AND. For example, searching for a rule called “Test 1” returns “Test 1”, “Test 2”, and “1 Rule”.
  • The Query Rewriting UI’s search feature may return a “Problem with underlying storage” error if your query is *:* followed by an additional query term. This is an invalid query; use only *:* to search for all query rewrites.
  • In Underperforming Query Rewriting, job-generated query improvements do not preserve the original query’s uppercase and lowercase characters. For example, an underperforming query containing “brandX” may be rewritten to contain “brandx”. You may need to manually modify the query improvements to preserve the correct cases.
  • After selecting multiple business rules where some rules have tags, adding more tags to the selected rules deletes their existing tags. To work around this, add tags to individual rules instead of adding them in bulk.
See also the Fusion Server 4.2.0 known issues.