Fusion 5.9

    Configure Neural Hybrid Search

    Neural Hybrid Search combines lexical search with semantic vector search.

    To use semantic vector search in Fusion, you need to configure Neural Hybrid Search. Then you can choose the balance between lexical and semantic vector search that works best for your use case.

    Before you begin, see Neural Hybrid Search for conceptual information that can help you understand how to configure this feature.

    This feature is currently only available to clients who have contracted with Lucidworks for features related to Neural Hybrid Search and Lucidworks AI.

    This feature is only available in Fusion 5.9.5 and later versions of Fusion 5.9.

    Lucidworks recommends setting up Neural Hybrid Search with Lucidworks AI; however, you can instead use Ray or Seldon vector search. If you use Lucidworks AI, you can use the default LWAI Neural Hybrid Search pipeline.

    This section explains how to configure vector search using Lucidworks AI, but you can also configure it using Ray or Seldon.

    Before you set up the Lucidworks AI index and query stages, make sure you have set up your Lucidworks AI Gateway integration.

    Configure the LWAI Vectorize Field index stage

    To vectorize the index pipeline fields:

    1. Sign in to Fusion and click Indexing > Index Pipelines.

    2. Click the pipeline you want to use.

    3. Click Add a new pipeline stage.

    4. In the AI section, click LWAI Vectorize Field.

    5. In the Label field, enter a unique identifier for this stage.

    6. In the Condition field, enter a script that results in true or false, which determines if the stage should process.

    7. In the Account Name field, select the Lucidworks AI API account name defined in the Lucidworks AI Gateway service.

      If you do not see your account name, check that your Lucidworks AI Gateway integration is correctly configured.

    8. In the Model field, select the Lucidworks AI model to use for encoding.

      If you do not see any model names and you are a non-admin Fusion user, check that you have these permissions: PUT,POST,GET:/LWAI-ACCOUNT-NAME/**

      Your Fusion account name must match the value of fusion.lwai.account[n].name in the Lucidworks AI Gateway integration YAML.

    9. In the Source field, enter the name of the string field where the value should be submitted to the model for encoding. If the field is blank or does not exist, this stage is not processed. Template expressions are supported.

    10. In the Destination field, enter the name of the field where the vector value from the model response is saved.

      If a value is entered in this field, the following information is added to the document:

      • {Destination Field} is the vector field.

      • {Destination Field}_b is the boolean value if the vector has been indexed.
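
      For illustration, a document emerging from this stage might look like the following sketch. All field names here are hypothetical stand-ins for your Destination field value:

```python
# Hypothetical document after the LWAI Vectorize Field stage, assuming the
# Destination field was set to "body_vector_512v" (names are placeholders).
doc = {
    "id": "doc-1",
    "body_t": "Cordless drill with two batteries.",
    "body_vector_512v": [0.012, -0.044, 0.173],  # truncated; length matches the model dimension
    "body_vector_512v_b": True,                  # boolean: the vector was indexed
}

# The boolean companion field is handy as a query-time filter, for example
# fq=body_vector_512v_b:true to restrict results to vectorized documents.
```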

    11. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. The useCaseConfig parameter that is common to embedding use cases is dataType, but each use case may have other parameters. The value for the query stage is query.

    12. Optionally, you can use the Model Configuration section for any additional parameters you want to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases. For more information, see Prediction API.

    13. Select the Fail on Error checkbox to generate an exception if an error occurs while generating a prediction for a document.

    14. Click Save.

    15. Index data using the new pipeline. Verify the vector field is indexed by confirming the field is present in documents.

    For reference information, see Lucidworks AI Vectorize Field.

    Configure the LWAI Vectorize Query stage

    To vectorize the query in the query pipeline:

    1. Sign in to Fusion and click Querying > Query Pipelines.

    2. Select the pipeline you want to use.

    3. Click Add a new pipeline stage.

    4. Click LWAI Vectorize Query.

    5. In the Label field, enter a unique identifier for this stage.

    6. In the Condition field, enter a script that results in true or false, which determines if the stage should process.

    7. In the Account Name field, select the name of the Lucidworks AI account.

    8. In the Model field, select the Lucidworks AI model to use for encoding.

    9. In the Query Input field, enter the location from which the query is retrieved.

    10. In the Output context variable field, enter the name of the variable where the vector value from the response is saved.

    11. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. The useCaseConfig parameter that is common to embedding use cases is dataType, but each use case may have other parameters. The value for the query stage is query.
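
      As a sketch, the useCaseConfig payload assembled in this step could look like the following. Only dataType is named in this guide; any other parameters depend on your use case:

```python
import json

# useCaseConfig parameters for the query stage; dataType is the parameter
# this guide names, with the value "query" for the query pipeline.
use_case_config = {"dataType": "query"}

payload = json.dumps(use_case_config)
# payload == '{"dataType": "query"}'
```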

    12. Optionally, you can use the Model Configuration section for any additional parameters you want to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases. For more information, see Prediction API.

    13. Select the Fail on Error checkbox to generate an exception if an error occurs during this stage.

    14. Click Save.

    The Top K setting is set to 100 by default. Lucidworks recommends leaving this at 100 or setting it to 200.

    This query stage must be placed before the Solr Query stage. For more information, see Reorder Query Pipeline Stages.

    Using additional pipeline stages

    Vector Search does not support all available pipeline stages. At minimum, use the Solr Query and LWAI Vectorize Query stages. Do not use the Query Fields stage when setting up vector search.

    Modify Solr managed-schema (5.9.4 and earlier)

    This step is required if you’re migrating a collection from a version of Fusion that does not support Neural Hybrid Search. If creating a new collection in Fusion 5.9.5 and later, you can continue to Configure Hybrid Query stage.

    1. Go to System > Solr Config and then click managed-schema to edit it.

    2. Comment out <copyField dest="_text_" source="*"/> and add <copyField dest="text" source="*_t"/> below it. This will concatenate and index all *_t fields.

    3. Add the following code block to the managed-schema file:

      <fieldType class="solr.DenseVectorField" hnswBeamWidth="200"
          hnswMaxConnections="45" name="knn_DIM_vector" similarityFunction="cosine"
          vectorDimension="DIM"/>
      <dynamicField docValues="false" indexed="true" multiValued="false" name="*_512v"
            required="false" stored="true" type="knn_DIM_vector"/>
      This example uses a 512 vector dimension; replace DIM with your model's dimension. If your model uses a different dimension, modify the code block to match, for example, *_1024v. There is no limitation on supported vector dimensions.

    Configure neural hybrid queries

    In Fusion 5.9.10 and later, you use the Neural Hybrid Query stage to configure neural hybrid queries. In Fusion 5.9.9 and earlier, you use the Hybrid Query stage.

    Configure Neural Hybrid Query stage (5.9.10 and later)

    1. In the same query pipeline where you configured vector search, click Add a new pipeline stage, then select Neural Hybrid Query.

    2. In the Label field, enter a unique identifier for this stage or leave blank to use the default value.

    3. In the Condition field, enter a script that results in true or false, which determines if the stage should process, or leave blank.

    4. In the Lexical Query Input field, enter the location from which the lexical query is retrieved. For example, <request.params.q>. Template expressions are supported.

    5. In the Lexical Query Weight field, enter the relative weight of the lexical query. For example, 0.3. If this value is 0, no re-ranking will be applied using the lexical query scores.

    6. In the Lexical Query Squash Factor field, enter a value that will be used to squash the lexical query score.

      The squash factor controls how much difference there is between the top-scoring documents and the rest. It helps ensure that documents with slightly lower scores still have a chance to show up near the top. For this value, Lucidworks recommends entering the inverse of the lexical maximum score across all queries for the given collection.
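
      As an illustration of that recommendation (this is not Fusion's internal scoring code, just a simplified linear-rescaling sketch), setting the squash factor to the inverse of the maximum lexical score brings lexical scores into roughly the same 0-1 range as vector similarities:

```python
# Illustrative sketch only: Fusion's actual squash function may differ.
def squash(lexical_score, squash_factor):
    """Scale a raw lexical score into roughly the 0-1 range."""
    return lexical_score * squash_factor

max_lexical_score = 25.0               # highest lexical score seen for the collection
squash_factor = 1 / max_lexical_score  # the recommended inverse

squashed = [squash(s, squash_factor) for s in [25.0, 12.5, 5.0]]
# roughly [1.0, 0.5, 0.2]: now comparable in magnitude to vector similarities

# so a weighted hybrid combination (0.3 lexical, 0.7 vector) stays balanced:
hybrid_score = 0.3 * squashed[0] + 0.7 * 0.82
```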

    7. In the Vector Query Field, enter the name of the Solr field for k-nearest neighbor (KNN) vector search.

    8. In the Vector Input field, enter the location from which the vector is retrieved. Template expressions are supported. For example, a value of <ctx.vector> evaluates the context variable resulting from a previous stage, such as the LWAI Vectorize Query stage.

    9. In the Vector Query Weight field, enter the relative weight of the vector query. For example, 0.7.

    10. In the Min Return Vector Similarity field, enter the minimum vector similarity value to qualify as a match from the Vector portion of the hybrid query.

    11. In the Min Traversal Vector Similarity field, enter the minimum vector similarity value to use when walking through the graph during the Vector portion of the hybrid query.

    12. When enabled, the Compute Vector Similarity for Lexical-Only Matches setting computes vector similarity scores for documents in lexical search results but not in the initial vector search results. Select the checkbox to enable this setting.

    13. If you want to use pre-filtering:

      1. Uncheck Block pre-filtering.

        In the JavaScript context (ctx), the preFilterKey object becomes available.

      2. Add a JavaScript stage after the Neural Hybrid Query stage and use it to configure your pre-filter.

        The preFilter object adds both the top-level fq and preFilter to the parameters for the vector query. You do not need to manually add the top-level fq in the JavaScript stage. See the example below:

        if (ctx.hasProperty("preFilterKey")) {
          var preFilter = ctx.getProperty("preFilterKey");
          // Define your own filter query; "category_s:electronics" is a placeholder.
          var filterQuery = "category_s:electronics";
          preFilter.addFilter(filterQuery);
        }
    14. Click Save.

    Make sure the Hybrid Query stage is ordered before the Solr Query stage. See Reorder Query Pipeline Stages.

    Configure Hybrid Query stage (5.9.9 and earlier)

    If you’re setting up Neural Hybrid Search in Fusion 5.9.9 and earlier, use the Hybrid Query stage. If you’re using Fusion 5.9.10 or later, use the Neural Hybrid Query stage.

    1. In the same query pipeline where you configured vector search, click Add a new pipeline stage, then select Hybrid Query.

    2. In the Label field, enter a unique identifier for this stage or leave blank to use the default value.

    3. In the Condition field, enter a script that results in true or false, which determines if the stage should process, or leave blank.

    4. In the Lexical Query Input field, enter the location from which the lexical query is retrieved. For example, <request.params.q>. Template expressions are supported.

    5. In the Lexical Query Weight field, enter the relative weight of the lexical query. For example, 0.3. If this value is 0, no re-ranking will be applied using the lexical query scores.

    6. In the Number of Lexical Results field, enter the number of lexical search results to include in re-ranking. For example, 1000. A value of 0 is ignored.

    7. In the Vector Query Field, enter the name of the Solr field for k-nearest neighbor (KNN) vector search.

    8. In the Vector Input field, enter the location from which the vector is retrieved. Template expressions are supported. For example, a value of <ctx.vector> evaluates the context variable resulting from a previous stage, such as the LWAI Vectorize Query stage.

    9. In the Vector Query Weight field, enter the relative weight of the vector query. For example, 0.7.

    10. Select the Use KNN Query checkbox to use the knn query parser and configure its options. This option cannot be selected if the Use VecSim Query checkbox is selected. If neither Use KNN Query nor Use VecSim Query is selected, Use KNN Query is used.

      1. If the Use KNN Query checkbox is selected, enter a value in the Number of Vector Results field. For example, 1000.

    11. Select the Use VecSim Query checkbox to use the vecSim query parser and configure its options. This option cannot be selected if Use KNN Query checkbox is selected.

      If the Use VecSim Query checkbox is selected, enter values in the following fields:

      • Min Return Vector Similarity. Enter the minimum vector similarity value to qualify as a match from the Vector portion of the hybrid query.

      • Min Traversal Vector Similarity. Enter the minimum vector similarity value to use when walking through the graph during the Vector portion of the hybrid query. The value must be lower than, or equal to, the value in the Min Return Vector Similarity field.

    12. In the Minimum Vector Similarity Filter, enter the value for a minimum similarity threshold for filtering documents. This option applies to all documents, regardless of other score boosting such as rules or signals.

    13. Click Save.

    Make sure the Hybrid Query stage is ordered before the Solr Query stage. See Reorder Query Pipeline Stages.

    Perform hybrid searches

    After setting up the stages, you can perform hybrid searches via the knn query parser as you would with Solr. Specify the search vector and include it in the query. For example, change the q parameter to a knn query parser string.
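
    For example, here is a minimal sketch of building such a q parameter value. The field name and vector values are placeholders; a real query uses the embedding produced by the LWAI Vectorize Query stage:

```python
# Build a Solr knn query parser string for the q parameter.
# "body_vector_512v" and the vector values are placeholders.
field = "body_vector_512v"
top_k = 100
vector = [0.12, -0.03, 0.56]  # truncated; a real vector matches the model dimension

q = "{{!knn f={f} topK={k}}}[{v}]".format(
    f=field, k=top_k, v=", ".join(str(x) for x in vector)
)
# q == '{!knn f=body_vector_512v topK=100}[0.12, -0.03, 0.56]'
```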

    You can also preview the results in the Query Workbench. Try a few different queries, and adjust the weights and parameters in the Hybrid Query stage to find the best balance between lexical and semantic vector search for your use case. You can also disable and re-enable the Neural Hybrid Query stage to compare results with and without it.

    XDenseVectorField is not supported in Fusion 5.9.5 and above. Instead, use DenseVectorField.

    Troubleshoot inconsistent results

    Neural Hybrid Search leverages Solr semantic vector search, which has known behaviors that can be inconsistent at query time. These behaviors include score fluctuations on re-querying, documents appearing and disappearing on re-querying, and, when semantic vector search is configured without the Hybrid stages, documents that cannot be found at all. This section outlines possible reasons for inconsistent behavior and resolution steps.

    NRT replicas and HNSW graph challenges

    Lucidworks recommends using PULL and TLOG replicas. These replica types copy the index of the leader replica, which results in the same HNSW graph on every replica. When querying, the HNSW approximation query will be consistent given a static index.

    In contrast, NRT replicas each have their own index, so they also have their own HNSW graph. HNSW is an Approximate Nearest Neighbor (ANN) algorithm, so it does not return exactly the same results for differently constructed graphs. This means queries can and will return different results per HNSW graph (one per NRT replica in a shard), which can lead to noticeable result shifts. When using NRT replicas, the shifts can be made less noticeable by increasing the topK parameter; variation will still occur, but it should be smaller among the returned documents. Another way to mitigate shifts is to use Neural Hybrid Search with a vector similarity cutoff.

    For more information, refer to Solr Types of Replicas.

    With Neural Hybrid Search, the lexical BM25 and TF-IDF score differences that can occur with NRT replicas, because of index differences for deleted documents, can also affect the combined hybrid score. If you choose to use NRT replicas, any lexical or semantic vector variations can be exacerbated.

    Orphaning (Disconnected Nodes)

    Solr’s implementation of dense vector search depends on the Lucene implementation of HNSW ANN. The Lucene implementation has a known issue where, in some collections, nodes in the HNSW graph become unreachable via graph traversal, essentially becoming disconnected or “orphaned.”

    Identify orphaning

    Run the following command to identify orphaning:

    curl -sS -u 'USERNAME:PASSWORD' 'https://FUSION_HOST:FUSION_PORT/api/solrAdmin/default/COLLECTION_NAME/select' \
      --form-string 'fl=id,vecSim:$vecSim' \
      --form-string 'rows=1' \
      --form-string 'q=(*:* -{!knn f=VECTOR_FIELD topK=999999 v=$vec})' \
      --form-string 'vecSim=vectorSimilarity(VECTOR_FIELD,$vec)' \
      --form-string 'vec=COMPATIBLE_VECTOR'

    If the collection doesn’t have a vector for every document, include a filter so only the documents that have vectors are included. Filter on the boolean vector field, as in this example: --form-string 'fq=VECTOR_FIELD_b:true' \

    Construct a KNN exclusion query where topK is higher than the number of vectors in your collection. If the number of vectors in your collection exceeds 999,999, increase the topK value to be at least that number.

    If any documents are returned, there are orphans, and the ids you see are the orphans. Proceed to Resolving orphans. If no documents are returned, there are likely no orphans. You can try a few varying vectors to be certain.
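
    The Suggested values table below is keyed by the orphaning rate. A small sketch of that arithmetic, with illustrative numbers:

```python
def orphaning_rate(orphans_found, total_vectors):
    """Fraction of vectors unreachable by HNSW graph traversal.

    orphans_found: numFound returned by the KNN exclusion query above.
    total_vectors: number of documents that have an indexed vector.
    """
    if total_vectors == 0:
        return 0.0
    return orphans_found / total_vectors

# For example, 1,200 orphans out of 40,000 vectors is a 3% orphaning rate,
# which falls in the "5% or less" row of the suggested values table.
rate = orphaning_rate(1200, 40000)
```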

    Resolving orphans

    To resolve orphans, do the following:

    1. Increase the HNSW Solr schema parameters hnswBeamWidth and hnswMaxConnections per the Suggested values below.

    2. Save the schema.

    3. Clear the index.

    4. Re-index your collection.

    Suggested values

    Orphaning rate    hnswBeamWidth    hnswMaxConnections
    5% or less        300              64
    5% - 25%          500              100
    25% or more       3200             512