
        Configure the Smart Answers Pipelines (5.3 and above)

        Before beginning this procedure, train a machine learning model using either the FAQ method or the cold start method.

        For instructions for Fusion 5.1 and 5.2, see Configure The Smart Answers Pipelines (5.1 and 5.2 only).

        Regardless of how you set up your model, the deployment procedure is the same:

        1. Create the Milvus collection

        For complete details about job configuration options, see the Create Collections in Milvus job.

        1. Navigate to Collections > Jobs > Add + and select Create Collections in Milvus.

        2. Configure the job:

          1. Enter an ID for this job.

          2. Under Collections, click Add.

          3. Enter a collection name.

          4. In the Dimension field, enter the dimension size of the vectors to store in this Milvus collection. The dimension must match the size of the vectors returned by the encoding model. For example, the Smart Answers Pre-trained Coldstart models output vectors with 512 dimensions. The dimensionality of encoders trained by the Smart Answers Supervised Training job depends on the provided parameters and is printed in the training job logs. See Smart Answers Supervised model training for more details.

        3. Click Save.

          The Create Collections in Milvus job can create multiple collections at once. In the image below, the first collection is used in the indexing and query steps; the other two collections are used in the pipeline setup example.

          Create Collections in Milvus job

        4. Click Run > Start to run the job.
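        The key constraint in this job is that the collection's Dimension matches the encoder's output size. A minimal sketch of that check, with a hypothetical encoder output:

```python
# Hypothetical 512-dimensional encoder output for one document.
# The Smart Answers pre-trained cold-start models emit 512-d vectors;
# supervised models print their dimensionality in the training job logs.
document_vector = [0.0] * 512

# Dimension configured on the Milvus collection in the job.
collection_dimension = 512

# If these disagree, inserts into the collection fail, so it is worth
# validating before indexing.
assert len(document_vector) == collection_dimension
print("dimension check passed")
```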

        2. Configure the index pipeline

        1. Open the Index Workbench.

        2. Load or create your datasource using the default smart-answers index pipeline.

          smart-answers default index pipeline

        3. Configure the Encode into Milvus stage:

          1. Change the value of Model ID to match the model deployment name you chose when you configured the model training job.

          2. Change Field to Encode to the document field name to be processed and encoded into dense vectors.

          3. Ensure the Encoder Output Vector matches the output vector from the chosen model.

          4. Ensure the Milvus Collection Name matches the collection name created via the Create Collections in Milvus job.

            To test out your settings, turn on Fail on Error in the Encode into Milvus stage and Apply the changes. This will cause an error message to display if any settings need to be changed.

            Encode Into Milvus index stage

        4. Save the datasource.

        5. Index your data.

        3. Configure the query pipeline

        1. Open the Query Workbench.

        2. Load the default smart-answers query pipeline.

          smart-answers default query pipeline

        3. Configure the Milvus Query stage:

          1. Change the Model ID value to match the model deployment name you chose when you configured the model training job.

          2. Ensure the Encoder Output Vector matches the output vector from the chosen model.

          3. Ensure the Milvus Collection Name matches the collection name created via the Create Collections in Milvus job.

          4. Milvus Results Context Key can be changed as needed. It will be used in the Milvus Ensemble Query Stage to calculate the query score.

            Milvus Query stage

        4. In the Milvus Ensemble Query stage, update the Ensemble math expression as needed, based on your model and the name used in the prior stage for storing the Milvus results.

          Milvus Ensemble Query stage

        5. Save the query pipeline.

        Pipeline Setup Example

        Index and retrieve the question and answer together

        To show question and answer together in one document (that is, treat the question as the title and the answer as the description), you can index them together in the same document. You can still use the default smart-answers index and query pipelines with a few additional changes.

        Prior to configuring the Smart Answers pipelines, use the Create Collections in Milvus job to create two collections, question_collection and answer_collection, to store the encoded questions and the encoded answers, respectively.

        Index Pipeline

        As shown in the pictures below, you will need two Encode into Milvus stages, named Encode Question and Encode Answer respectively.

        Encode Question (Encode Into Milvus) stage

        Pipeline setup example - Encode Question stage

        Encode Answer (Encode Into Milvus) stage

        Pipeline setup example - Encode Answer stage

        In the Encode Question stage, specify Field to Encode to be title_t and change the Milvus Collection Name to match the new Milvus collection, question_collection.

        In the Encode Answer stage, specify Field to Encode to be description_t and change the Milvus Collection Name to match the new Milvus collection, answer_collection.

        (For more detail, see Smart Answers Detailed Pipeline Setup.)

        Query Pipeline

        Since we have two dense vectors generated during indexing, at query time we need to compute both the query-to-question distance and the query-to-answer distance. This can be set up as shown in the pictures below, with two Milvus Query stages, one for each of the two Milvus collections. To store the two distances separately, the Milvus Results Context Key must be different in each of these two stages.

        In the Query Questions stage, we set the Milvus Results Context Key to milvus_questions and the Milvus collection name to question_collection.

        Query Questions (Milvus Query) stage

        Pipeline setup example - Query Questions stage

        In the Query Answers stage, we set the Milvus Results Context Key to milvus_answers and the Milvus collection name to answer_collection.

        Query Answers (Milvus Query) stage

        Pipeline setup example - Query Answers stage

        Now we can ensemble them together with the Milvus Ensemble Query stage, using an Ensemble math expression that combines the results from the two query stages. If we want the question scores and answer scores weighted equally, we would use: 0.5 * milvus_questions + 0.5 * milvus_answers. This is especially recommended when you have a limited FAQ dataset and want to utilize both question and answer information.
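        As a rough sketch of what the ensemble expression computes (the score values here are purely illustrative):

```python
# Illustrative per-document similarity scores stored under the two
# Milvus Results Context Keys configured above.
milvus_questions = 0.82   # query-to-question cosine similarity
milvus_answers = 0.64     # query-to-answer cosine similarity

# Equal weighting, as in the expression:
# 0.5 * milvus_questions + 0.5 * milvus_answers
ensemble_score = 0.5 * milvus_questions + 0.5 * milvus_answers
print(round(ensemble_score, 2))  # 0.73
```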

        Milvus Ensemble Query stage

        Pipeline setup example - Milvus Ensemble Query stage

        Evaluate the query pipeline

        The Evaluate QnA Pipeline job evaluates the rankings of results from any Smart Answers pipeline and finds the best set of weights in the ensemble score. See Evaluate a Smart Answers Pipeline for setup instructions.

        Detailed pipeline setup

        Typically, you can use the default pipelines included with Fusion AI. This topic provides information you can use to customize these pipelines. See also Configure The Smart Answers Pipelines.

        "question-answering" index pipeline

        question-answering default index pipeline

        "question-answering" query pipeline

        question-answering default query pipeline

        Index pipeline setup

        Stages in the default "question-answering" index pipeline

        question-answering default index pipeline

        Typically, only one custom index stage needs to be configured in your index pipeline:

        If you are using a dynamic schema, make sure this stage is added after the Solr Dynamic Field Name Mapping stage.

        There are several required parameters:

        Query pipeline setup

        Stages in the default "question-answering" query pipeline

        question-answering default query pipeline

        In the "question-answering" query pipeline, all stages (except the Solr Query stage) can be tuned for better Smart Answers:

        Optionally, you can also:

        The Query Fields stage

        The first stage is Query Fields which should have one main parameter specified:

        • Return Fields - Since documents should be retrieved with vectors and clusters, make sure to include the documents’ Vector Field and Clusters Field, which are compressed_document_vector_s and document_clusters_ss by default.

        The Rewrite Pagination Parameters for Reranking stage

        The Rewrite Pagination Parameters for Reranking query stage specifies how many results Solr should return to create a pool of candidates for further re-ranking via the model. This happens under the hood; users still see the number of results controlled by the "start" and "rows" query parameters.

        • Number of Results - Number of results to request from Solr, to be re-ranked by downstream stages. Default value: 500. This parameter affects both results accuracy and query time performance. Increasing the number can lead to better accuracy but worse query time performance and vice versa.

          This parameter can be dynamically overwritten by rowsFromSolrToRerank raw query parameter in the request.
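        A sketch of this behavior under illustrative data: fetch a large candidate pool from Solr, re-rank it, then apply the user's original start/rows window.

```python
def paginate_after_rerank(candidates, start, rows):
    """Re-rank a large Solr candidate pool, then return only the
    page the user asked for via start/rows."""
    # Hypothetical: each candidate carries a rerank score computed by
    # the downstream vector-distance stages.
    reranked = sorted(candidates, key=lambda d: d["score"], reverse=True)
    return reranked[start:start + rows]

# 500 candidates fetched from Solr (Number of Results), but the user
# only ever sees rows=10 results starting at start=0.
pool = [{"id": i, "score": i / 500} for i in range(500)]
page = paginate_after_rerank(pool, start=0, rows=10)
print(len(page))         # 10
print(page[0]["id"])     # 499 (highest score after re-ranking)
```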

        The Filter Stop Words stage

        When limited data is provided to train FAQ or coldstart models, the model may not be able to learn the weights of stop words accurately. This matters especially when using Solr to retrieve the top x results (that is, when Number of Clusters is set to 0), since stop words can have a high impact on the Solr score. You can use this stage to provide a customized stop words list in the Words to filter part of the stage configuration.

        See Filter Stop Words for reference information. See also Best Practices.
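        A sketch of what the stage does, with an illustrative stop-word list (the real list goes in the stage's Words to filter setting):

```python
# Illustrative customized stop-word list.
stop_words = {"how", "do", "i", "the", "a"}

def filter_stop_words(query):
    """Drop stop words so they don't dominate Solr retrieval when the
    model has not learned their (low) weights from limited data."""
    return " ".join(t for t in query.split() if t.lower() not in stop_words)

print(filter_stop_words("How do I reset the password"))
# reset password
```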

        The Escape Query stage

        The Escape Query query stage escapes Lucene/Solr reserved characters in a query. In the Smart Answers use case, queries are natural-language questions. They tend to be longer than regular queries and with punctuation which we do not want to interpret as Lucene/Solr query operators.

        The Query-Document Vectors Distance stage

        The Query-Document Vectors Distance query stage computes the distance between a query vector and each candidate document vector. It should be added after the Solr Query stage.

        • Query Vector Context Key - The context key that stores the query's dense vector. It should be set to the same value as Vector Context Key in the stage that encodes the query. Default value: query_vector.

        • Document Vector Field - The field that stores dense vectors or their compressed representations. It should be the same as the Vector Field used at indexing time. Default: compressed_document_vector_s.

        • Keep document vector field - This option allows you to keep or drop returned document vectors. In most cases it makes sense to drop document vectors after computing vector distances, to improve run time performance. Default: False (unchecked).

        • Distance Type - Choose one of the supported distance types. For a FAQ solution, cosine_similarity is recommended, which produces values with a maximum value of 1. Higher value means higher similarity. Default: cosine_similarity.

        • Document Vectors Distance Field - The result document field that contains the distance between its vector and the query vector. Default: vectors_distance.

        The Compute Mathematical Expression stage

        We can use the Compute Mathematical Expression query stage to combine the Solr score with the vectors similarity score to borrow Solr’s keywords matching capabilities. This stage should be set before the Result Document Field Sorting stage. Result field name in this stage and Sort field in the Result Document Field Sorting stage should be set to the same value.

        • Math expression - Specify a mathematical formula for combining the Solr score and the vector similarity score. Note that Solr scores do not have an upper bound, so it is best to rescale the score using the max_score value, which is the maximum Solr score across all returned docs. vectors_distance is the cosine similarity in our case and already has an upper bound of 1.0. The supported mathematical functions are listed in the Oracle documentation.

          Default: 0.3 * score / max_score + 0.7 * vectors_distance

          We recommend adjusting the ratios 0.3 and 0.7 to other values based on query-time evaluation results (mentioned in the section above). Feel free to try other math expressions (such as log10) to scale the Solr score as well.
        • Result field name - The document field name that contains the combined score. The same value should be set in the Result Document Field Sorting stage, in the Sort Field parameter. Default: ensemble_score
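        A sketch of the default expression with illustrative score values; max_score rescales the unbounded Solr score into [0, 1] before mixing:

```python
# Illustrative values for one result document.
score = 7.2             # Solr score for this document (unbounded)
max_score = 12.0        # max Solr score across all returned docs
vectors_distance = 0.9  # cosine similarity, already bounded by 1.0

# Default expression: 0.3 * score / max_score + 0.7 * vectors_distance
ensemble_score = 0.3 * score / max_score + 0.7 * vectors_distance
print(round(ensemble_score, 2))  # 0.81
```

Adjusting the 0.3/0.7 weights shifts influence between keyword matching and semantic similarity, which is what the query-time evaluation job tunes.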

        The Result Document Field Sorting stage

        Finally, the Result Document Field Sorting stage sorts results based on the vector distances obtained from the Query-Document Vectors Distance stage, or on the ensemble score from the Compute Mathematical Expression stage.

        • Sort Field - The field to use for document sorting. Default value: ensemble_score

        • Sort order - The sorting order: (asc)ending or (desc)ending. If cosine_similarity is used with higher values meaning higher similarity, then this should be set to desc. Default: desc.

        Configuring the number of clusters

        Another important parameter to choose is the Number of clusters parameter in the query pipeline. There are two options for utilizing the dense vectors in a query pipeline:

        • If you set the Number of clusters parameter to 0 (the default), Solr returns the top x results, which are then re-ranked using vector cosine similarity, or the Solr score is combined with the vector score in the Compute Mathematical Expression stage. If you choose this option, adjust the Number of Results parameter (default value 500) in the Rewrite Pagination Parameters for Reranking stage in the query pipeline. This parameter controls how many documents are returned from Solr to Fusion and re-ranked under the hood. You can adjust it to find a good balance between relevancy and query speed.

        • If you set the Number of clusters parameter to a value x greater than 0, then when a query comes in, Fusion transforms the query into a dense vector and finds the closest x clusters to which the query belongs. The pipeline then obtains documents from the same clusters as the query and re-ranks them based on the vector cosine similarity between the query and the indexed answers/questions. Good default values are 1 cluster per document and 10 clusters per query.

        Documents from the clusters can be obtained in the following ways:

        • By using Solr search. The search space is narrowed by both clusters and Solr, and the Solr score can be used in the ensemble in the same way as in the first option described above.

        • By using only clusters. All documents from the matching clusters are retrieved for re-ranking. Solr does not narrow the search space, but it is also not used in the ensemble.
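        A minimal sketch of the clusters-only retrieval path, with hypothetical documents and field names modeled on document_clusters_ss:

```python
# Hypothetical indexed documents with their cluster assignments
# (cf. the document_clusters_ss field).
documents = [
    {"id": "d1", "clusters": {"c1"}},
    {"id": "d2", "clusters": {"c2"}},
    {"id": "d3", "clusters": {"c1", "c3"}},
]

def candidates_from_clusters(docs, query_clusters):
    """Clusters-only retrieval: every document sharing a cluster with
    the query becomes a re-ranking candidate; Solr is not involved."""
    return [d for d in docs if d["clusters"] & query_clusters]

# The query is encoded and assigned to its x closest clusters.
query_clusters = {"c1"}
print([d["id"] for d in candidates_from_clusters(documents, query_clusters)])
# ['d1', 'd3']
```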