Fusion 5.9

    Configure the LWAI Prediction query stage

    The LWAI Prediction query stage is a Fusion query pipeline stage that enriches your search results with Generative AI predictions from Lucidworks AI.

    For reference information, see LWAI Prediction query stage. For the LWAI Prediction index stage, see Configure the LWAI Prediction index stage.
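    Before walking through the UI steps, it can help to see the shape of the configuration they produce. The sketch below shows how the stage might appear inside a query pipeline definition; the key names and values are illustrative assumptions, not the authoritative stage schema — consult the LWAI Prediction query stage reference for the exact keys.

```python
# Hypothetical sketch of an LWAI Prediction stage inside a query pipeline
# definition. All key names and values here are assumptions for illustration;
# the real stage schema is defined by your Fusion instance.
stage = {
    "type": "lwai-prediction",          # assumed stage type identifier
    "label": "LWAI Prediction",         # unique label for this stage
    "accountName": "my-lucidworks-ai",  # example integration name
    "useCase": "rag",                   # a Lucidworks AI use case
    "model": "gpt-4o",                  # example model name
    "inputContextVariable": "q",        # context variable used as input
    "destinationVariable": "llmResponse",
}

pipeline = {
    "id": "my-llm-pipeline",
    "stages": [stage],
}
```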

    To configure this stage:

    1. Sign in to Fusion and click Querying > Query Pipelines.

    2. Click Add+ to add a new pipeline.

    3. Enter a name for the pipeline in the Pipeline ID field.

    4. Click Add a new pipeline stage.

    5. In the AI section, click LWAI Prediction.

    6. In the Label field, enter a unique identifier for this stage.

    7. In the Condition field, enter a script that evaluates to true or false; the stage runs only when the result is true.

    8. In the Account Name field, select the name of the Lucidworks AI integration defined when the integration was created.

    9. In the Use Case field, select the Lucidworks AI use case to associate with this stage.

      • To generate a list of the use cases for your organization, see Use Case API.

      • The available use cases are described in Prediction API.

    10. In the Model field, select the Lucidworks AI model to associate with this stage.

    11. In the Input context variable field, enter the name of the variable in context to be used as input. Template expressions are supported.

    12. In the Destination variable name and context output field, enter the name that will be used as both the query response header in the prediction results and the context variable that contains the prediction.

      • If a value is entered in this field:

        • {destination name}_t is the full response.

        • In the context:

          • _lw_ai_properties_ss contains the Lucidworks account, boolean setting for async, use case, input for the call, and the collection.

      • If no value is entered in this field:

        • lw_ai{use case}_t contains the response.response object, which is the raw model output.

        • lw_ai{use case}_response_s is the full response.
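      As a rough illustration of the naming rules above, a client could select the prediction key like this. The flat-dict response shape is an assumption for illustration, not the documented Fusion response format.

```python
from typing import Optional

def extract_prediction(headers: dict, use_case: str,
                       destination: Optional[str]) -> Optional[str]:
    # If a destination variable name was configured, the full response is
    # returned under "{destination}_t"; otherwise Fusion falls back to
    # "lw_ai{use case}_response_s". Treating the response headers as a flat
    # dict is an illustrative assumption.
    if destination:
        return headers.get(f"{destination}_t")
    return headers.get(f"lw_ai{use_case}_response_s")

# With a destination variable configured as "answer":
prediction = extract_prediction({"answer_t": "Generated answer"}, "rag", "answer")
```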

    13. Select the Include Response Documents? checkbox to include the response documents in the Lucidworks AI request. This option is only available for certain use cases. If it is selected, run the Solr Query stage first to ensure documents exist before the LWAI Prediction query stage runs.

      The RAG use case requires response documents and supports attaching a maximum of three. To prevent errors, enter all of the entries described in the Document Field Mappings section.

    14. In the Document Field Mappings section, enter the LW AI Document field name and its corresponding Response document field name to map from input documents to the fields accepted by the Prediction API RAG use case. The fields are described in the Prediction API.

      If information is not entered in this section, the default mappings are used.

      • The body and source fields are required.

        • body - description_t is the contents of the document.

        • source - link_t is the URL/ID of the document.

      • The title and date fields are optional.

        • title - title_t is the title of the document.

        • date - _lw_file_modified_tdt is the creation date of the document in epoch time format.
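      The default mappings above can be expressed as a simple lookup table. The projection helper below is a hypothetical illustration of what the mapping does, not part of Fusion.

```python
# Default Document Field Mappings from the list above:
# LW AI field name -> response document field name.
DEFAULT_MAPPINGS = {
    "body": "description_t",          # required: contents of the document
    "source": "link_t",               # required: URL/ID of the document
    "title": "title_t",               # optional: title of the document
    "date": "_lw_file_modified_tdt",  # optional: creation date (epoch time)
}

def map_document(solr_doc: dict) -> dict:
    """Project a Solr response document onto the fields the RAG use case
    accepts, skipping fields the document does not have. Illustrative only."""
    return {lw_field: solr_doc[src]
            for lw_field, src in DEFAULT_MAPPINGS.items()
            if src in solr_doc}

mapped = map_document({"description_t": "Body text",
                       "link_t": "https://example.com/doc1"})
```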

    15. In the Use Case Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI.

      • The useCaseConfig parameter is only applicable to certain use cases. For more information, see the Async Prediction API and the Prediction API.

      • The memoryUuid parameter is required in the Standalone Query Rewriter use case, and is optional in the RAG use case.

        For more information, see Prediction API.
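      For example, a Use Case Configuration for the Standalone Query Rewriter use case might contain a single entry; the UUID below is a made-up placeholder, and which keys apply to which use case is defined by the Prediction API.

```python
# Illustrative useCaseConfig entry. The memoryUuid value is a placeholder;
# it is required for the Standalone Query Rewriter use case and optional
# for the RAG use case (see the Prediction API for the authoritative list).
use_case_config = {
    "memoryUuid": "00000000-0000-0000-0000-000000000000",
}
```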

    16. In the Model Configuration section, click the + sign to enter the parameter name and value to send to Lucidworks AI. Several modelConfig parameters are common to generative AI use cases. For more information, see Prediction API.
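      A Model Configuration might hold parameters like the following. The key names are commonly seen generative-model settings shown as assumptions; confirm the exact modelConfig keys and valid ranges in the Prediction API documentation.

```python
# Example modelConfig parameters. Key names are assumptions based on common
# generative-model settings; verify against the Prediction API before use.
model_config = {
    "temperature": 0.7,  # sampling randomness (assumed key name)
    "maxTokens": 256,    # cap on response length (assumed key name)
    "topP": 1.0,         # nucleus sampling threshold (assumed key name)
}
```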

    17. In the API Key field, enter the secret value specified in the external model.

      • For OpenAI models, "apiKey" is the value in the model’s "[OPENAI_API_KEY]" field. For more information, see Authentication API keys.

      • For Azure OpenAI models, "apiKey" is the value generated by Azure in either the model’s "[KEY1]" or "[KEY2]" field. For requirements to use Azure models, see Generative AI models.

      • For Google VertexAI models, "apiKey" is the value in the model’s "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]" field. For more information, see Create and delete Google service account keys.
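      For the Google VertexAI case, the value is the base64-encoded contents of the service account key file. A minimal sketch of producing that value (the file path is a placeholder):

```python
import base64

def encode_service_account_key(path: str) -> str:
    """Base64-encode the contents of a Google service account key file,
    producing the string to paste into the API Key field."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Roughly equivalent shell one-liner: base64 -w0 service-account.json
```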

    18. Select the Fail on Error checkbox to generate an exception if an error occurs during this stage.

    19. Click Save.