Set up semantic vector search
These instructions are for Milvus and do not work with Solr SVS.
These are the suggested steps to get started with semantic vector search (SVS).
- Craft training data.
  The training data fed to SVS models creates connections between consumer queries and products from a catalog. Crafting the training data has the greatest influence on the performance and quality an SVS model can deliver.
- Diagnose zero search result themes.
  Identify the common themes across the top zero search result queries so you can craft training data that addresses each problem theme. This step involves looking at a report of the top zero search result queries and grouping them into categories such as:
  - In Stock/Out of Stock
  - Product Not Carried
  - Foreign Vocabulary
  - Misspelling
- Create Zero Search Result training pairs.
  A Zero Search Result (ZSR) query is the most obvious indicator that a search engine is struggling to retrieve relevant results, so create training data pairs geared toward learning what consumers were looking for when submitting such terms. ZSR training pairs are created by looking at what a consumer did after landing on a zero results page, specifically at what products the consumer adds to cart afterward. After a ZSR, a subset of consumers will try to search another way, either by using different search terms or by browsing category pages. This is referred to as "Learning from the Persistent Shopper." In the end, pairs of data are created that look like `ZSR Query, Next Item Added-to-Cart`.
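To illustrate the pairing logic, here is a minimal sketch in Python. The event schema, session IDs, query strings, and SKU values are all hypothetical examples; in a real deployment this data would come from the platform's collected signals.

```python
from itertools import groupby

# Hypothetical clickstream events: (session_id, timestamp, event_type, value).
# "zero_result_query" carries the query text; "add_to_cart" carries a product SKU.
events = [
    ("s1", 1, "zero_result_query", "choclate"),
    ("s1", 2, "add_to_cart", "SKU-CHOC-BAR"),
    ("s2", 1, "zero_result_query", "hdmi cord"),
    ("s2", 3, "add_to_cart", "SKU-HDMI-CABLE"),
    ("s3", 1, "zero_result_query", "snow shovel"),  # no follow-up add-to-cart
]

def zsr_training_pairs(events):
    """Pair each zero-result query with the next add-to-cart in the same session."""
    pairs = []
    ordered = sorted(events, key=lambda e: (e[0], e[1]))
    for _, session in groupby(ordered, key=lambda e: e[0]):
        pending_query = None
        for _, _, event_type, value in session:
            if event_type == "zero_result_query":
                pending_query = value
            elif event_type == "add_to_cart" and pending_query is not None:
                pairs.append((pending_query, value))
                pending_query = None
    return pairs

print(zsr_training_pairs(events))
# [('choclate', 'SKU-CHOC-BAR'), ('hdmi cord', 'SKU-HDMI-CABLE')]
```

Note that session s3 produces no pair: the shopper never added anything to cart after the zero-result page, so there is nothing to learn from.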
- Create Abandoned Search training pairs.
  Another indicator of troublesome queries or poor relevance is the Abandoned Search (AS). An AS occurs when a consumer submits a query and receives search results but does not click on any of those results. Look at what a consumer adds to their cart following an AS. Pairs of this type follow the format `AS Query, Next Item Added-to-Cart`.
- Create Converted Search training pairs.
  The two prior types of training data outline circumventive consumer paths to find relevant results. These circumventive paths are essential for enhancing the coverage of the SVS models in terms of edge cases and understanding vocabulary that is foreign to the product catalog. The final type of training data, Converted Search (CS), outlines the "happy path" a consumer takes: the consumer submits a search, finds exactly what they are looking for in the search results, then adds that item to their cart.
  This type of training data is essential for enhancing the precision and accuracy of the SVS model by providing positively converting pairs in which there is a direct correlation between the query submitted and the search results returned. It is also the simplest type of training data to create, as it is just an aggregation of already collected `Search Query, Item Added-to-Cart` pairs.
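The aggregation can be sketched with a standard-library `Counter`; the signal tuples below are hypothetical examples, and the count doubles as a natural confidence weight for training.

```python
from collections import Counter

# Hypothetical converted-search signals: the consumer searched, found the item,
# and added it to cart. Each signal is a (query, item_added) pair.
signals = [
    ("running shoes", "SKU-RUN-01"),
    ("running shoes", "SKU-RUN-01"),
    ("running shoes", "SKU-RUN-02"),
    ("coffee maker", "SKU-COF-11"),
]

# Converted Search training data is just an aggregation of these pairs.
cs_pairs = Counter(signals)
print(cs_pairs.most_common())
# [(('running shoes', 'SKU-RUN-01'), 2), (('running shoes', 'SKU-RUN-02'), 1), (('coffee maker', 'SKU-COF-11'), 1)]
```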
- Apply a filter.
  The last step of the training data crafting process is to apply a filter to the aggregated collection of training data pairs to remove noisy pairs of the ZSR and AS types. A noisy data pair is one in which the search query has no connection to the next item added to cart. Noisy signals can manifest in several ways. For example, a consumer shopping from a list who does not find a product might, instead of persisting with different search methods, simply move on to the next, unrelated, item on their list. To remove noisy pairs from the training data collection, look at the pairs with low aggregation counts and selectively remove them.
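A minimal sketch of the count-based filter follows. The `pair_counts` aggregation and the `MIN_COUNT` threshold are illustrative assumptions; in practice you would tune the threshold against your own signal volume.

```python
# Hypothetical aggregated ZSR/AS training pairs with their occurrence counts.
pair_counts = {
    ("choclate", "SKU-CHOC-BAR"): 41,    # strong signal: many shoppers made this connection
    ("hdmi cord", "SKU-HDMI-CABLE"): 17,
    ("snow shovel", "SKU-DOG-FOOD"): 1,  # likely noise: a list shopper moved on to an unrelated item
}

MIN_COUNT = 3  # illustrative threshold; tune against real signal volume

# Keep only pairs seen often enough to represent a genuine query-product connection.
filtered = {pair: n for pair, n in pair_counts.items() if n >= MIN_COUNT}
print(sorted(filtered))
```

The intuition: if many independent shoppers made the same query-to-product connection, it is almost certainly real; a pair seen once is as likely to be coincidence as signal.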
- Train the SVS models.
  After validating that the training data pairs were created appropriately, use this training data to build SVS deep learning models with a supervised learning methodology. Pre-trained models serve as a base for understanding relationships between items in the product catalog, and the crafted training data then refines those baseline relationships for a specific business and its consumers. To maximize the effectiveness of the SVS models, conduct model trials (or iterations) to find the best combination of configuration and training data.
- Deploy the model.
  Upon completion of a Supervised Training job, the model is deployed using Seldon-Core, an open source technology for deploying and managing models within a Kubernetes cluster. This Seldon-Core model deployment is used in both the index and query pipeline procedures.
- Index the product catalog.
  Part of the implementation of SVS is encoding the product catalog into a vector space by creating a vector for each product within the catalog and storing the vectors in an appropriate collection. Milvus, an open source vector similarity search engine, stores the encoded vectors from the product catalog and performs vector similarity searches in query pipelines.
- Integrate with an index pipeline.
  To populate a Milvus collection with vectors from the catalog, add the Encode into Milvus stage to the index pipeline. This stage vectorizes each product within the catalog and stores the vector in a Milvus collection as the products are being indexed into a Solr collection. Each vector in a Milvus collection has a corresponding `milvus_id` stored as part of the product record in the Solr collection.
- Set up a Create Collections in Milvus job.
  Give the job and collection a name and fill in the Dimension parameter. The Dimension parameter is always 2x the value of the RNN Function Units List parameter in the Supervised Training job. For example, if the RNN Function Units List parameter is 128, then the Dimension parameter is 256.
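The relationship can be written as a one-line helper; `milvus_dimension` is a hypothetical name for illustration, not a product API.

```python
def milvus_dimension(rnn_function_units: int) -> int:
    """Dimension for the Create Collections in Milvus job: always 2x the
    RNN Function Units List value from the Supervised Training job."""
    return 2 * rnn_function_units

print(milvus_dimension(128))  # 256
```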
- Refresh the trained model.
  The trained model needs a periodic refresh to account for changes in consumer behavior, such as seasonal shifts. To refresh the SVS models, train a new model using the most recent consumer training data, reindex the catalog using the freshly created model, and switch queries to be encoded with the new model as well.
- Integrate with a query pipeline.
  Prepare the SVS solution for load testing and production by integrating vector search with query pipelines. There are three main components to this integration:
  - Ensure that the vector search component of the pipeline is only triggered when zero results are returned by the main query pipeline.
  - Ensure that the results returned by vector search maintain the functionality of results from the traditional pipeline, such as sorting and faceting.
  - Ensure that searches involving the vector search components are tagged appropriately in the generated signals for analytics purposes.
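The trigger logic of the first component can be sketched as follows. `main_solr_search` and `vector_search` are hypothetical stand-ins for the real pipeline stages, stubbed here so the sketch runs standalone; the `signal_tags` field illustrates the analytics tagging from the third component.

```python
def search(query):
    """Fallback-style query flow: vector search runs only on zero results."""
    response = main_solr_search(query)
    if response["numFound"] > 0:
        return response                               # traditional lexical results
    fallback = vector_search(query)                   # encode query, search Milvus
    fallback["signal_tags"] = ["vector_fallback"]     # tag for analytics signals
    return fallback

# Stub implementations standing in for the real pipeline stages.
def main_solr_search(query):
    if query == "choclate":                           # misspelling: lexical search finds nothing
        return {"numFound": 0, "docs": []}
    return {"numFound": 2, "docs": ["d1", "d2"]}

def vector_search(query):
    return {"numFound": 1, "docs": ["SKU-CHOC-BAR"]}

print(search("choclate"))
```

The key design point is that vector search never competes with the main pipeline on queries it already handles; it only absorbs the zero-result tail.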
- Conduct load testing.
  Test load based on your needs. For example, if 20% of queries return zero results at a peak of 1200 QPS, the vector search pipeline must handle 240 QPS plus comfort room, or roughly 300 QPS. If the initial vector search configuration does not perform well against such benchmarks, consider configuration changes:
  - Optimize resource allocation.
  - Turn off Solr spell check within the vector search pipeline. Removing spell check is the main configuration change needed to drive query response times down to acceptable levels.
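The example arithmetic works out as follows; the 25% comfort factor is an assumption chosen to match the ~300 QPS figure, not a prescribed value.

```python
# Sizing the vector search load-test target from the example in the text.
peak_qps = 1200
null_result_rate = 0.20   # share of queries returning zero results
comfort_factor = 1.25     # assumption: 25% headroom over the raw requirement

vector_qps = peak_qps * null_result_rate   # raw QPS the fallback must absorb
target_qps = vector_qps * comfort_factor   # load-test target with comfort room
print(vector_qps, target_qps)
```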