Fusion 5.9

    Set Up a Pre-Trained Cold Start Model for Smart Answers

    Lucidworks provides these pre-trained cold start models for Smart Answers:

    • qna-coldstart-large - a large model trained on a variety of corpora and tasks.

    • qna-coldstart-multilingual - covers 16 languages: Arabic, Chinese (Simplified), Chinese (Traditional), English, French, German, Italian, Japanese, Korean, Dutch, Polish, Portuguese, Spanish, Thai, Turkish, and Russian.

    When you use these models, you do not need to run the model training job. Instead, you run a job that deploys the model into Managed Fusion. The Create Seldon Core Model Deployment job deploys your model as a Docker image in Kubernetes, which you can scale up or down like other Managed Fusion services.
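    Because the deployed model runs as a standard deployment in Kubernetes, you can inspect and scale it with the usual Kubernetes tooling. The sketch below uses the official Kubernetes Python client; the namespace, resource names, and the assumption that replicas can be adjusted through the SeldonDeployment resource's spec.replicas field are illustrative only. Substitute the values from your own cluster.

        # Illustrative sketch only: the namespace, resource names, and the
        # spec.replicas field on the SeldonDeployment are assumptions -- adjust
        # them to match your Managed Fusion cluster.
        from kubernetes import client, config

        config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
        apps = client.AppsV1Api()

        # List deployments in the (assumed) Fusion namespace to find the model pods.
        for dep in apps.list_namespaced_deployment(namespace="fusion").items:
            print(dep.metadata.name, dep.status.ready_replicas)

        # Scale the (hypothetical) Seldon deployment for the model to two replicas.
        client.CustomObjectsApi().patch_namespaced_custom_object(
            group="machinelearning.seldon.io",
            version="v1",
            namespace="fusion",
            plural="seldondeployments",
            name="qna-coldstart-large",
            body={"spec": {"replicas": 2}},
        )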

    These models are a good basis for a cold start solution if your data does not contain much domain-specific terminology. Otherwise, consider training a model using your existing content.

    The vector dimension for both models is 512. You might need this value when creating collections in Milvus, as in the sketch below.
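    If you create a collection yourself, its vector field must use this dimension. The following sketch uses the pymilvus 2.x client for illustration only; the connection settings and the collection and field names are assumptions, not values Managed Fusion requires.

        # Illustrative pymilvus (2.x) sketch: host, port, and all names are assumptions.
        from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections

        connections.connect(host="localhost", port="19530")

        fields = [
            FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=True),
            # dim must match the model output: 512 for both cold start models.
            FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=512),
        ]
        schema = CollectionSchema(fields, description="Cold start answer vectors")
        collection = Collection(name="qna_coldstart_vectors", schema=schema)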

    1. Deploy a pre-trained cold-start model into Managed Fusion

    The pre-trained cold-start models are deployed using a Managed Fusion job called Create Seldon Core Model Deployment. This job downloads the selected pre-trained model and installs it in Managed Fusion. The steps below configure the job in the Managed Fusion UI; a scripted equivalent is sketched after the list.

    1. Navigate to Collections > Jobs.

    2. Select Add > Create Seldon Core Model Deployment.

    3. Enter a Job ID, such as deploy-qna-coldstart-multilingual or deploy-qna-coldstart-large.

    4. In the Model Name field, enter one of the following:

      • qna-coldstart-multilingual

      • qna-coldstart-large

    5. In the Docker Repository field, enter lucidworks.

    6. In the Image Name field, enter one of the following:

      • qna-coldstart-multilingual:v1.1

      • qna-coldstart-large:v1.1

    7. Leave the Kubernetes Secret Name for Model Repo field empty.

    8. In the Output Column Names for Model field, enter one of the following:

      • For qna-coldstart-multilingual: [vector]

      • For qna-coldstart-large: [vector, compressed_vector]

    9. Click Save.

    10. Click Run > Start to start the deployment job.
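    If you prefer to script the deployment rather than use the UI, the same job can be created and started through the Managed Fusion REST API. The sketch below uses Python's requests library; the endpoint paths, job type, and configuration keys shown are assumptions rather than documented values, so confirm them against the API reference for your release (for example, save the job in the UI first and inspect the configuration it produces).

        # Illustrative sketch: the endpoint paths, job type, and configuration keys
        # below are assumptions -- verify them against your Fusion release's API docs.
        import requests

        FUSION = "https://FUSION_HOST:6764"   # assumed base URL; use your own
        AUTH = ("admin", "admin-password")    # use your own credentials

        job_config = {
            "id": "deploy-qna-coldstart-large",
            "type": "deploy-model",                          # assumed job type name
            "modelName": "qna-coldstart-large",
            "modelDockerRepo": "lucidworks",
            "modelDockerImage": "qna-coldstart-large:v1.1",
            "outputColumnNames": ["vector", "compressed_vector"],
        }

        # Create (or update) the job configuration, then start the job.
        requests.post(f"{FUSION}/api/spark/configurations", json=job_config, auth=AUTH).raise_for_status()
        requests.post(f"{FUSION}/api/spark/jobs/{job_config['id']}", auth=AUTH).raise_for_status()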