This document provides a comprehensive list of deprecated and removed features in Fusion, organized by version. It includes details on the deprecation and expected removal of various features, along with notes on alternatives or replacements where applicable. The document also contains links to relevant documentation for further information. The deprecations section lists features scheduled for removal in future releases, and the removals section lists features already removed in past releases. Each entry includes the feature name, the version in which it was deprecated or removed, and relevant notes or recommended replacements. The deprecation schedule outlined in this document applies exclusively to Fusion 5. Other major versions of Fusion have separate deprecation schedules not covered by this document.
Fusion Connectors deprecations and removals: Deprecation and removal details for Fusion Connectors are found at Fusion Connectors Deprecations and Removals.

Deprecations

Deprecated features in Fusion 5 are scheduled for removal in a future release. The following table lists each deprecated feature with an expected removal value, either a version number or a date. When a version number is specified, the feature will be removed in that specific release. When a date is specified, it represents the earliest possible date the feature can be removed; removal will not occur before that date.
FAQ: When were these features originally deprecated? If a feature was originally deprecated in a prior version, the original deprecation version is noted as a footnote in the Deprecated column.
Feature | Deprecated | Expected Removal | Notes
Hybrid Query (5.9.9 and earlier): A stage to perform hybrid lexical-semantic search that combines BM25-type lexical search with KNN dense vector search via Solr. | 5.9.13 | TBD | Use Neural Hybrid Query (5.9.10 and later) instead to benefit from updates and improvements to the search capabilities of the stage.
App Insights: A tool within the Fusion workspace that provides real-time, searchable reports and visualizations from signals data. | 5.9.13[1] | TBD, no earlier than October 22, 2025 | -
Parsers Indexing CRUD API: The Parsers API provides CRUD operations for parsers, allowing users to create, read, update, and delete parsers. | 5.9.11[1] | TBD, no earlier than September 4, 2025 | A new API introduced in Fusion 5.12.0, the Async Parsing API, replaces the Parsers Indexing CRUD API and is available in Fusion 5.9.11. This API provides improved functionality and aligns with Fusion’s updated architecture, ensuring consistency across versions.
Smart Answers Coldstart Training job: The Smart Answers Coldstart Training job in Fusion is designed to help train models when there is no historical training data available. | 5.9.13[1] | TBD, no earlier than October 22, 2025 | Use pre-trained models or the supervised training job instead of the Smart Answers Coldstart Training job. Pre-trained models eliminate the need for manual training when historical data is unavailable, while supervised training jobs offer greater flexibility in model customization.
Data Augmentation Job: The Data Augmentation Job is designed to enhance training and testing data for machine learning models by increasing data quantity and introducing textual variations. It performs tasks such as backtranslation, synonym substitution, keystroke misspelling, and split word tasks. | 5.9.13[1] | TBD, no earlier than October 22, 2025 | -
Webapps Service: The Webapps service provides an embedded instance of App Studio within each Fusion instance, simplifying the deployment process. | 5.9.10[2] | TBD, no earlier than August 20, 2025 | Deploy App Studio Enterprise using the Deploy App Studio Enterprise to a Fusion 5 Cluster (GKE) guide instead of relying on the Webapps service. This method improves scalability and provides a more robust deployment approach for enterprise environments.
Support for Nashorn JavaScript engine: Fusion uses the Nashorn JavaScript engine for the JavaScript index and query stages. | 5.9.8 | TBD, no earlier than July 7, 2025 | Use the OpenJDK Nashorn JavaScript engine instead of the deprecated Nashorn JavaScript engine in Fusion. This ensures continued JavaScript execution compatibility in pipeline configurations. You can select the engine from a dropdown in the pipeline views or in the workbenches.
Milvus Ensemble Query Stage: The Milvus Ensemble Query stage is used to enhance search results by incorporating vector-based similarity scoring. | 5.9.5 | TBD, no earlier than May 4, 2025 | Replace the Milvus Ensemble Query stage with the Seldon or Lucidworks AI vector query stages. These alternatives improve vector search integration and support within Fusion’s evolving AI and machine learning capabilities.
Milvus Query Stage: The Milvus Query stage performs vector similarity search in Milvus, an open source vector similarity search engine integrated into Fusion to streamline its deep learning capabilities and reduce the workload on Solr. | 5.9.5 | TBD, no earlier than May 4, 2025 | Replace the Milvus Query stage with the Seldon or Lucidworks AI vector query stages. These options enhance query efficiency and provide broader support for machine learning-driven search.
Milvus Response Update Query Stage: The Milvus Response Update stage is designed to update response documents with vector similarity and ensemble scores. | 5.9.5 | TBD, no earlier than May 4, 2025 | Replace the Milvus Response Update Query stage with the Seldon or Lucidworks AI vector query stages. These alternatives improve performance when updating response documents with vector similarity data.
Domain-Specific Language (DSL): The Domain-Specific Language (DSL) in Fusion is designed to simplify the complexity of crafting search queries. It allows users to express complex search queries without needing to understand the intricate syntax required by the legacy Solr parameter format. | 5.9.4 | TBD | Avoid using the Domain-Specific Language (DSL) feature, as it may cause performance degradation. Instead, use the DSL to Legacy Parameters query pipeline stage to convert DSL requests to the legacy Solr format while maintaining compatibility.
Security Trimming Query Stage: The Security Trimming query pipeline stage in Fusion is designed to restrict query results by matching security ACL metadata, ensuring that users only see results they are authorized to access. | 5.9.0 | TBD | Replace the Security Trimming Query stage with the Graph Security Trimming Query stage. The new method uses a single filter query across all data sources.
Field Parser Index Stage: The Field Parser index pipeline stage in Fusion is designed to parse content embedded within fields of documents. This stage operates separately from the parsers that handle whole documents. | All versions of 5.9.x[3] | TBD | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents.
Tika Server Parser: Apache Tika Server is a versatile parser that supports parsing many document formats and is designed for Enterprise Search crawls. This stage is not compatible with asynchronous Tika parsing. | All versions of 5.9.x[3] | 5.9.12 | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents.
Apache Tika Parser: The Apache Tika Parser is a versatile tool designed to support the parsing of numerous unstructured document formats. | All versions of 5.9.x[3] | TBD | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents.
Logistic Regression Classifier Training Jobs: This job trains a logistic regression model with regularization to classify text into different categories. | All versions of 5.9.x[520] | TBD | Replace Logistic Regression Classifier Training Jobs with the Classification job. This alternative provides expanded configuration options and improved logging capabilities.
MLeap deployments of SpaCy and SparkNLP | 5.9.10[520] | 5.9.12 | Use Develop and Deploy a Machine Learning Model instead.
MLeap in Machine Learning models | 5.9.10[520] | 5.9.12 | -
Query-to-Query Collaborative Similarity Job: This job uses SparkML’s Alternating Least Squares (ALS) to analyze past queries and find similarities between them. It helps recommend related queries or suggest relevant items based on previous searches. | All versions of 5.9.x[520] | TBD | Switch to Query-to-Query Session-Based Similarity jobs instead of the Query-to-Query Collaborative Similarity Job. The new method improves performance and increases the coverage of query similarity calculations.
Random Forest Classifier Training Jobs: This job trains a machine learning model using a random forest algorithm to classify text into different categories. | All versions of 5.9.x[520] | TBD | Use the Classification job instead of Random Forest Classifier Training Jobs. This alternative provides enhanced configurability and better logging for improved model training.
Time-based partitioning: Time-based partitioning in Fusion collections allows mapping to multiple Solr collections or partitions based on specific time ranges. | All versions of 5.9.x[520] | TBD | -
Word2Vec Model Training Jobs: The Word2Vec model training job trains a shallow neural model to generate vector embeddings for text data and stores the results in a specified output collection. It supports configurable parameters for input data, model tuning, featurization, and output settings. | 5.9.11[520] | TBD, no earlier than September 4, 2025 | -
Connectors fetcher property AccessControlFetcher: Connectors that support security filtering previously used separate fetchers for content and access control. One fetcher type is now used for both content and security fetching. AccessControlFetcher has been deprecated. | All versions of 5.9.x[4] | TBD | Fetcher implementations that use AccessControlFetcher should instead use ContentFetcher.
Messaging Stage Configs | All versions of 5.9.x[4] | TBD | -
This article explains how to deploy App Studio Enterprise (ASE) to an existing Fusion 5 cluster in Google Kubernetes Engine (GKE) without using the Webapps service. Before completing this guide, ensure that your fusion.conf file points to the IP or URI and port of the proxy service. Also run the App Studio Enterprise application locally and verify that security and search features work with the cluster you are deploying to.

Prepare the package

  1. Package your app into a single JAR file:
    ./app-studio package
    
  2. App Studio Enterprise includes a Dockerfile. Create the App Studio Enterprise Docker image:
    docker build PATH -t APP_NAME
    
    Replace APP_NAME with the name of your application and PATH with the path to build from.
  3. You can test the Docker image locally with the following command:
    docker run -it -p LOCAL_PORT:8080 APP_NAME
    
    Replace LOCAL_PORT with the port on your local machine that will access the app, and APP_NAME with the ASE application name.
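For example, with a hypothetical application named my-ase-app built from the current directory and exposed on local port 8080, the sequence would be:

./app-studio package
docker build . -t my-ase-app
docker run -it -p 8080:8080 my-ase-app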

Publish the image

You can publish the Docker image anywhere that supports Docker images. This section explains how to publish a Docker image to Google Kubernetes Engine.
  1. Tag your container for the registry:
    docker tag APP_NAME gcr.io/PROJECT_NAME/APP_NAME
    
  2. Push your Docker image to the Google Container Registry:
    docker push gcr.io/PROJECT_NAME/APP_NAME
    
  3. Verify the image:
    gcloud container images list
    
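For example, with a hypothetical project named my-gcp-project and the hypothetical application my-ase-app, the full publish sequence would be:

docker tag my-ase-app gcr.io/my-gcp-project/my-ase-app
docker push gcr.io/my-gcp-project/my-ase-app
gcloud container images list --repository=gcr.io/my-gcp-project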

Deploy the app to a cluster

After publishing your ASE Docker image, deploy the image to a cluster. Your existing Fusion 5 cluster is a good choice.
  1. Switch the context to your Fusion 5 cluster:
    gcloud container clusters get-credentials CLUSTER_NAME
    
    Replace CLUSTER_NAME with your existing Fusion 5 cluster’s name.
  2. Create a deployment in your cluster using the image you published:
    kubectl create deployment APP_NAME --image=gcr.io/PROJECT_NAME/APP_NAME:latest
    
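The Ingress created in the next section routes traffic to a Kubernetes Service. If no Service exists yet for your deployment, you can create one with kubectl expose; this is a minimal sketch, assuming ASE listens on port 8080 (as in the docker run example above) and that the Service should listen on port 80 to match the Ingress backend below:

kubectl expose deployment APP_NAME --port=80 --target-port=8080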

Create an Ingress resource

After deploying the app, create an Ingress resource for your ASE instance. If you require more Ingress rules, your Ingress resource will look different. Learn more about Ingress resources.
  1. Use the following command to create a minimal Ingress resource:
    cat <<EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: ase-ingress
    spec:
      backend:
        serviceName: $AppName
        servicePort: 80
    EOF
    
  2. Verify the Ingress resource:
    kubectl get ingress ase-ingress
    
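The manifest above uses the networking.k8s.io/v1beta1 Ingress API, which was removed in Kubernetes 1.22. On newer clusters, a roughly equivalent minimal resource using the networking.k8s.io/v1 API (assuming the same $AppName Service and port) would look like this:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ase-ingress
spec:
  defaultBackend:
    service:
      name: $AppName
      port:
        number: 80
EOF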
This tutorial walks you through deploying your own model to Fusion with Seldon Core.

Prerequisites

  • A Fusion instance with an app and indexed data
  • An understanding of Python and the ability to write Python code
  • Docker installed locally, plus a private or public Docker repository
  • Seldon-core installed locally: pip install seldon-core
  • Code editor; you can use any editor, but Visual Studio Code is used in the example
  • Model: paraphrase-multilingual-MiniLM-L12-v2 from Hugging Face
  • Docker image: example_sbert_model

Tips

  • Always test your Python code locally before uploading to Docker and then Fusion. This simplifies troubleshooting significantly.
  • Once you’ve created your Docker image, you can also test locally by running docker run with a specified port, such as 9000, and then sending a curl request to confirm functionality before deploying to Fusion. See the testing example below.
LucidAcademy: Lucidworks offers free training to help you get started. The course Intro to Machine Learning in Fusion focuses on using machine learning to infer the goals of customers and users in order to deliver a more sophisticated search experience.
Visit the LucidAcademy to see the full training catalog.

Local testing example

The examples in this section use the following commands:
  1. Docker command:
     docker run -p 127.0.0.1:9000:9000 <your-docker-image>
    
  2. Curl to hit Docker:
     curl -X POST -H 'Content-Type: application/json' -d '{"data": { "ndarray": ["Sentence to test"], "names":["text"]} }' http://localhost:9000/api/v1.0/predictions
    
  3. Curl model in Fusion:
     curl -u $FUSION_USER:$FUSION_PASSWORD -X POST -H 'Content-Type: application/json' -d '{"text": "i love fusion"}' https://<your-fusion>.lucidworks.com:6764/api/ai/ml-models/<your-model>/prediction
    
  4. See all your deployed models:
     curl -u USERNAME:PASSWORD http://FUSION_HOST:FUSION_PORT/api/ai/ml-models
    

Download the model

This tutorial uses the paraphrase-multilingual-MiniLM-L12-v2 model from Hugging Face, but any pre-trained model from https://huggingface.co will work with this tutorial. If you want to use your own model instead, you can do so, but your model must have been trained and then saved through a function similar to PyTorch’s torch.save(model, PATH) function. See Saving and Loading Models in the PyTorch documentation.

Format a Python class

The next step is to format a Python class which will be invoked by Fusion to get the results from your model. The skeleton below represents the format that you should follow. See also Packaging a Python model for Seldon Core using Docker in the Seldon Core documentation.
class MyModel(object):
    """
    Model template. You can load your model parameters in __init__ from a
    location accessible at runtime
    """

    def __init__(self):
        """
        Add any initialization parameters. These will be passed at runtime
        from the graph definition parameters defined in your seldondeployment
        kubernetes resource manifest.
        """
        print("Initializing")

    def predict(self, X, features_names, **kwargs):
        """
        Return a prediction.

        Parameters
        ----------
        X : array-like
        feature_names : array of feature names (optional)
        """
        print("Predict called - will run identity function")
        return X

    def class_names(self):
        return ["X_name"]
A real instance of this class with the Paraphrase Multilingual MiniLM L12 v2 model is as follows:
import logging
import os

from transformers import AutoTokenizer, AutoModel
from torch.nn import functional as F
from typing import Iterable
import numpy as np
import torch

log = logging.getLogger()

class mini():
    def __init__(self):
        self.tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
        self.model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

    #Mean Pooling
    def mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0] #First element of model_output contains all token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    def predict(self, X: np.ndarray, names=None, **kwargs):
        #   In Fusion there are several variables passed in the numpy array with the Milvus Query stage,
        #   Encode to Milvus index stage, and Vectorize Seldon index and query stages:
        #   [pipeline, bool, and text]. Text is the variable that will be encoded, so that is what is set to 'text'.
        #   When using the Machine Learning stage, the input map keys should match what is in this file.

        model_input = dict(zip(names, X))
        text = model_input["text"]

        with torch.inference_mode(): # Disables gradient tracking so torch runs more quickly
          # Tokenize sentences
          encoded_input = self.tokenizer(text, padding=True, truncation=True, return_tensors='pt')
          log.debug('encoded input: %s', encoded_input)
          model_output = self.model(**encoded_input)
          log.debug('model output: %s', model_output)

          # Perform pooling. In this case, mean pooling.
          sentence_embeddings = self.mean_pooling(model_output, encoded_input['attention_mask'])
          # Normalize embeddings, because Fusion likes it that way.
          sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=-1)
          # Fix the shape of the embeddings to match (1, 384).
          final = [sentence_embeddings.squeeze().cpu().detach().numpy().tolist()]
        return final

    def class_names(self) -> Iterable[str]:
        return ["vector"]
In the above code, an additional function has been added in the class; this is completely fine to do. Logging has also been added for debugging purposes. Two functions are non-negotiable:
  • init: The init function is where models, tokenizers, vectorizers, and the like should be set to self for invoking.
    It is recommended that you include your model’s trained parameters directly into the Docker container rather than reaching out to external storage inside init.
  • predict: The predict function processes the field or query that Fusion passes to the model.
    The predict function must handle any text processing needed for the model to accept the input, then invoke the model’s model.evaluate(), model.predict(), or equivalent function to get the expected model result.
    If the output needs additional manipulation, that should be done before the result is returned.
    For embedding models the return value must have the shape of (1, DIM), where DIM (dimension) is a consistent integer, to enable Fusion to handle the vector encoding into Milvus or Solr.
Use the exact name of the class when naming this file.
For the example above, the Python file is named mini.py and the class name is mini().
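Before building the Docker image, you can sanity-check the class locally, as suggested in the Tips section. The following is a minimal sketch, assuming mini.py is in your current directory and torch, transformers, and numpy are installed locally; the names list mirrors the variables Fusion passes in, and the expected output is one vector of 384 floats:

python -c '
import numpy as np
from mini import mini

m = mini()
# Simulate the [pipeline, bool, text] inputs that Fusion passes to the model.
out = m.predict(np.array(["pipeline", "false", "i love fusion"]), names=["pipeline", "bool", "text"])
print(len(out), len(out[0]))  # expect: 1 384
'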

Create a Dockerfile

The next step is to create a Dockerfile. The Dockerfile should follow this general outline; read the comments for additional details:
# It is important that the Python version is 3.x-slim for seldon-core
FROM python:3.9-slim
# Whatever directory (folder) contains the Python file for your Python class, the Dockerfile, and
# requirements.txt should be copied in and then set as the working directory.
COPY . /app
WORKDIR /app

# The requirements file for the Docker container
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# gRPC - Allows Fusion to do a Remote Procedure Call
EXPOSE 5000

# Define environment variables for seldon-core
# !!!MODEL_NAME must be EXACTLY the same as the Python file & Python class name!!!
ENV MODEL_NAME mini
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0

# Change ownership of the working directory (the same /app directory copied above) to the default user, required for Fusion
RUN chown -R 8888 /app

# Command to wrap the Python class with seldon-core so that it is usable in Fusion
CMD ["sh", "-c", "seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE"]

# You can use the following instead if you need shell features like environment variable expansion,
# or shell constructs such as pipes and redirects.
# See https://docs.docker.com/reference/dockerfile/#cmd for more details.
# CMD exec seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE

Create a requirements file

The requirements.txt file is a list of installs for the Dockerfile to run to ensure the Docker container has the right resources to run the model.
For the Paraphrase Multilingual MiniLM L12 v2 model, the requirements are as follows:
seldon-core
torch
transformers
numpy
In general, if an item was used in an import statement in your Python file, it should be included in the requirements file. An easy way to populate the requirements is to run the following command in the terminal, inside the directory that contains your code:
pip freeze > requirements.txt
If you use pip freeze, you must manually add seldon-core to the requirements file because it is not invoked in the Python file but is required for containerization.

Build and push the Docker image

After creating the <your_model>.py, Dockerfile, and requirements.txt files, you need to run a few Docker commands. Run the commands below in order:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
docker push [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
Using the example model, the terminal commands would be as follows:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t jstrmec/example_sbert_model:0.14 && docker push jstrmec/example_sbert_model:0.14
This repository is public and you can visit it here: example_sbert_model
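Before deploying to Fusion, you can optionally smoke test the pushed image locally, as described in the Tips section. A minimal sketch, assuming the example image tag above and that local port 9000 is free:

docker run --rm -p 127.0.0.1:9000:9000 jstrmec/example_sbert_model:0.14

Then, in another terminal, send a test sentence to the Seldon prediction endpoint:

curl -X POST -H 'Content-Type: application/json' -d '{"data": {"ndarray": ["Sentence to test"], "names": ["text"]}}' http://localhost:9000/api/v1.0/predictions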

Deploy the model in Fusion

Now you can go to Fusion to deploy your model.
  1. In Fusion, navigate to Collections > Jobs.
  2. Add a job by clicking the Add+ Button and selecting Create Seldon Core Model Deployment.
  3. Fill in each of the text fields:
    Parameter | Description
    Job ID | A string used by the Fusion API to reference the job after its creation.
    Model name | A name for the deployed model. This is used to generate the deployment name in Seldon Core. It is also the name that you reference as a model-id when making predictions with the ML Service.
    Model replicas | The number of load-balanced replicas of the model to deploy; specify multiple replicas for a higher-volume intake.
    Docker Repository | The public or private repository where the Docker image is located. If you’re using Docker Hub, fill in the Docker Hub username here.
    Image name | The name of the image with an optional tag. If no tag is given, latest is used.
    Kubernetes secret | If you’re using a private repository, supply the name of the Kubernetes secret used for access.
    Output columns | A list of column names that the model’s predict method returns.
  4. Click Save, then Run and Start. When the job finishes successfully, you can proceed to the next section.
Now that the model is in Fusion, it can be utilized in either index or query pipelines, depending on the model’s purpose. In this case the model is a word vectorizer or semantic vector search implementation, so both pipelines must invoke the model.

Apply an API key to the deployment

These steps are only needed if your model utilizes any kind of secret, such as an API key. If not, skip this section and proceed to the next.
  1. Create and modify a <seldon_model_name>_sdep.yaml file.
    In the first line, kubectl get sdep gets the details for the currently running Seldon Deployment job and saves those details to a YAML file. kubectl apply -f <seldon_model_name>_sdep.yaml adds the key to the Seldon Deployment job the next time it launches.
    kubectl get sdep <seldon_model_name> -o yaml > <seldon_model_name>_sdep.yaml
    # Modify <seldon_model_name>_sdep.yaml to add
           - env:
             - name: API_KEY
               value: "your-api-key-here"
    kubectl apply -f <seldon_model_name>_sdep.yaml
    
  2. Delete sdep before redeploying the model. The currently running Seldon Deployment job does not have the key applied to it. Delete it before redeploying and the new job will have the key.
    kubectl delete sdep <seldon_model_name>
    
  3. Lastly, you can encode into Milvus.
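Optionally, after the model redeploys, you can confirm that the environment variable was applied. A quick check, assuming the same <seldon_model_name> placeholder as above:

kubectl get sdep <seldon_model_name> -o yaml | grep -A 1 "name: API_KEY"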

Create a Milvus collection

  1. In Fusion, navigate to Collections > Jobs.
  2. Click the Add+ Button and select Create Collections in Milvus.
    This job creates a collection in Milvus for storing the vectors sent to it. The job is needed because a collection does not automatically spawn at indexing or query time if it does not already exist.
  3. Name the job and the collection.
  4. Click Add on the right side of the job panel.
    The key to creating the collection is the Dimension text field; this must exactly match the shape value your output prediction has.
    In our example the shape is (1,384), so 384 will be in the collection’s Dimension field. The Metric field should typically be left at the default of Inner Product, but this also depends on use case and model type.
  5. Click Save, then Run and Start.

Configure the Fusion pipelines

Your real-world pipeline configuration depends on your use case and model, but for our example we will configure the index pipeline and then the query pipeline.
Configure the index pipeline
  1. Create a new index pipeline or load an existing one for editing.
  2. Click Add a Stage and then Encode to Milvus.
  3. In the new stage, fill in these fields:
    • The name of your model
    • The output name you have for your model job
    • The field you’d like to encode
    • The collection name
  4. Save the stage in the pipeline and index your data with it.
Configure the query pipeline
  1. Create a new query pipeline or load an existing one for editing.
  2. Click Add a Stage and then Milvus Query.
  3. Fill in the configuration fields, then save the stage.
  4. Add a Milvus Ensemble Query stage.
    This stage is necessary to have the Milvus collection scores taken into account in ranking and to weight multiple collections. The Milvus Results Context Key from the Milvus Query stage is used in this stage to perform math on the Milvus result scores. One (1) is a typical multiplier for the Milvus results, but any number can be used.
  5. Save the stage and then run a query by typing a search term.
  6. To verify the Milvus results are correct, use the Compare+ button to see another pipeline without the model implementation and compare the number of results.
You have now successfully uploaded a Seldon Core model to Fusion and deployed it.

Removals

This section lists features that have been removed in past releases of Fusion. Each entry includes the feature name, the version in which it was removed, and any relevant notes or replacement recommendations.
Feature | Removed | Notes
Fusion 5.11
MLeap | 5.11.0 | Use Develop and Deploy a Machine Learning Model instead. This method provides a more flexible and modern approach to deploying machine learning models.
Subscriptions UI | 5.11.0 | -
Fusion 5.10
Forked Apache Tika Parser | 5.10.0 | The Forked Apache Tika Parser was replaced by the Tika Asynchronous Parser. The asynchronous parser improves performance by handling document parsing more efficiently and scaling better for enterprise workloads.
Analytics Catalog Query Stage | 5.10.0 | -
Fusion 5.9
Tika Server Parser | 5.9.12 | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents.
MLeap in Machine Learning models | 5.9.12 | -
Fusion 5.7
NLP Annotator Index Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example.
NLP Annotator Query Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example.
OpenNLP NER Extraction Index Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example.
Fusion 5.6
Fusion SQL | 5.6.1 | -
Apache Pulsar | 5.6.0 | Apache Pulsar was removed in Fusion 5.6.0 and replaced with Kafka. Kafka offers better scalability, reliability, and industry support for message streaming.
Log Viewer & DevOps Center UI panel | 5.6.0 | These features were removed in Fusion 5.6.0 as they depended on Apache Pulsar, which has been replaced by Kafka. Users should transition to Kafka-based logging and monitoring solutions.
Subscriptions API | 5.6.0 | -
Send to Message Bus Index Stage | 5.6.0 | -
Fusion 5.5
Jupyter | 5.5.2 | -
Superset | 5.5.2 | -
This tutorial walks you through deploying your own model to Fusion with Seldon Core.

Prerequisites

  • A Fusion instance with an app and indexed data
  • An understanding of Python and the ability to write Python code
  • Docker installed locally, plus a private or public Docker repository
  • Seldon-core installed locally: pip install seldon-core
  • Code editor; you can use any editor, but Visual Studio Code is used in the example
  • Model: paraphrase-multilingual-MiniLM-L12-v2 from Hugging Face
  • Docker image: example_sbert_model

Tips

  • Always test your Python code locally before uploading to Docker and then Fusion. This simplifies troubleshooting significantly.
  • Once you’ve created your Docker you can also test locally by doing docker run with a specified port, like 9000, which you can then curl to confirm functionality in Fusion. See the testing example below.
LucidAcademyLucidworks offers free training to help you get started.The Course for Intro to Machine Learning in Fusion focuses on using machine learning to infer the goals of customers and users in order to deliver a more sophisticated search experience:
Intro to Machine Learning in FusionPlay Button
Visit the LucidAcademy to see the full training catalog.

Local testing example

The examples in this section use the following models:
  1. Docker command:
     docker run -p 127.0.0.1:9000:9000 <your-docker-image>
    
  2. Curl to hit Docker:
     curl -X POST -H 'Content-Type: application/json' -d '{"data": { "ndarray": ["Sentence to test"], "names":["text"]} }' https://localhost:9000/api/v1.0/predictions
    
  3. Curl model in Fusion:
     curl -u $FUSION_USER:$FUSION_PASSWORD -X POST -H 'Content-Type: application/json' -d '{"text": "i love fusion"}' https://<your-fusion>.lucidworks.com:6764/api/ai/ml-models/<your-model>/prediction
    
  4. See all your deployed models:
     curl -u USERNAME:PASSWORD http://FUSION_HOST:FUSION_PORT/api/ai/ml-models
    

Download the model

This tutorial uses the paraphrase-multilingual-MiniLM-L12-v2 model from Hugging Face, but any pre-trained model from https://huggingface.co will work with this tutorial.If you want to use your own model instead, you can do so, but your model must have been trained and then saved though a function similar to the PyTorch’s torch.save(model, PATH) function. See Saving and Loading Models in the PyTorch documentation.

Format a Python class

The next step is to format a Python class which will be invoked by Fusion to get the results from your model. The skeleton below represents the format that you should follow. See also Packaging a Python model for Seldon Core using Docker in the Seldon Core documentation.
class MyModel(object):
    """
    Model template. You can load your model parameters in __init__ from a
    location accessible at runtime
    """

    def __init__(self):
        """
        Add any initialization parameters. These will be passed at runtime
        from the graph definition parameters defined in your seldondeployment
        kubernetes resource manifest.
        """
        print("Initializing")

    def predict(self,X,features_names,**kwargs):
        """
        Return a prediction.

        Parameters
        ----------
        X : array-like
        feature_names : array of feature names (optional)
        """
        print("Predict called - will run identity function")
        return X

    def  class_names(self):
        return ["X_name"]
A real instance of this class with the Paraphrase Multilingual MiniLM L12 v2 model is as follows:
import logging
import os

from transformers import AutoTokenizer, AutoModel
from torch.nn import functional as F
from typing import Iterable
import numpy as np
import torch

log = logging.getLogger()

class mini():
    def __init__(self):
        self.tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
        self.model= AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

    #Mean Pooling
    def mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0] #First element of model_output contains all token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    def predict(self, X:np.ndarray, names=None, **kwargs):
        #   In Fusion there are several variables passed in the numpy array with the Milvus Query stage,
        #   Encode to Milvus index stage, and Vectorize Seldon index and query stage:
        #   [pipeline, bool, and text]. Text is what variable will be encoded, so that is what will be set to 'text'
        #   When using the Machine Learning stage, the input map keys should match what what is in this file.

        model_input = dict(zip(names, X))
        text = model_input["text"]

        with torch.inference_mode(): # Allows torch to run more quickly
          # Tokenize sentences
          encoded_input = self.tokenizer(text, padding=True, truncation=True, return_tensors='pt')
          log.debug('encoded input',str(encoded_input))
          model_output = self.model(**encoded_input)
          log.debug('model output',str(model_output))

          # Perform pooling. In this case, max pooling.
          sentence_embeddings = self.mean_pooling(model_output, encoded_input['attention_mask'])
          # Normalize embeddings, because Fusion likes it that way.
          sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=-1)
          # Fixing the shape of the emebbedings to match (1, 384).
          final = [sentence_embeddings.squeeze().cpu().detach().numpy().tolist()]
        return final

    def class_names(self) -> Iterable[str]:
        return ["vector"]
In the above code, an additional function has been added in the class; this is completely fine to do. Logging has also been added for debugging purposes.Two functions are non-negotiable:
  • init: The init function is where models, tokenizers, vectorizers, and the like should be set to self for invoking.
    It is recommended that you include your model’s trained parameters directly into the Docker container rather than reaching out to external storage inside init.
  • predict: The predict function processes the field or query that Fusion passes to the model.
    The predict function must be able to handle any text processing needed for the model to accept input invoked in its model.evaluate(), model.predict(), or equivalent function to get the expected model result.
    If the output needs additional manipulation, that should be done before the result is returned.
    For embedding models the return value must have the shape of (1, DIM), where DIM (dimension) is a consistent integer, to enable Fusion to handle the vector encoding into Milvus or Solr.
Use the exact name of the class when naming this file.
For the example, above the Python file is named mini.py and the class name is mini().

Create a Dockerfile

The next step is to create a Dockerfile. The Dockerfile should follow this general outline; read the comments for additional details:
#It is important that python version is 3.x-slim for seldon-core
FROM python:3.9-slim
# Whatever directory(folder)the python file for your python class, Dockerfile, and
# requirements.txt is in should be copied then denoted as the work directory.
COPY . /app
WORKDIR /app

# The requirements file for the Docker container
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# GRPC - Allows Fusion to do a Remote Procedure Call
EXPOSE 5000

# Define environment variable for seldon-core
# !!!MODEL_NAME must be the EXACT same as the python file & python class name!!!
ENV MODEL_NAME mini
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0

# Changing active directory folder (same one as above on lines 5 & 6) to default user, required for Fusion
RUN chown -R 8888 /app

# Command to wrap python class with seldon-core to allow it to be usable in Fusion
CMD ["sh", "-c", "seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE"]

# You can use the following if You need shell features like environment variable expansion or
# You need to use shell constructs like pipes, redirects, etc.
# See https://docs.docker.com/reference/dockerfile/#cmd for more details.
# CMD exec seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE

Create a requirements file

The requirements.txt file is a list of installs for the Dockerfile to run to ensure the Docker container has the right resources to run the model.
For the Paraphrase Multilingual MiniLM L12 v2 model, the requirements are as follows:
seldon-core
torch
transformers
numpy
In general, if an item was used in an import statement in your Python file, it should be included in the requirements file.An easy way to populate the requirements is by using in the following command in the terminal, inside the directory that contains your code:
pip freeze > requirements.txt
If you use pip freeze, you must manually add seldon-core to the requirements file because it is not invoked in the Python file but is required for containerization.

Build and push the Docker image

After creating the <your_model>.py, Dockerfile, and requirements.txt files, you need to run a few Docker commands. Run the commands below in order:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
docker push [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
Using the example model, the terminal commands would be as follows:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t jstrmec/example_sbert_model:0.14; docker push jstrmec/example_sbert_model:0.14
This repository is public and you can visit it here: example_sbert_model
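Before relying on the pushed image, you can optionally smoke-test the container locally. Below is a sketch in Python, assuming the image is running locally (for example, docker run -p 9000:9000 jstrmec/example_sbert_model:0.14) and the requests library is installed; Seldon Core microservices serve REST predictions on port 9000 at /api/v1.0/predictions, though the exact response envelope can vary by seldon-core version.
import requests

payload = {"data": {"names": ["text"], "ndarray": ["Sentence to test"]}}
resp = requests.post("http://localhost:9000/api/v1.0/predictions", json=payload, timeout=30)
resp.raise_for_status()
body = resp.json()
# For this example model, the returned ndarray should hold one 384-dimension vector.
# (The envelope may differ slightly between seldon-core versions.)
print(len(body["data"]["ndarray"][0]))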

Deploy the model in Fusion

Now you can go to Fusion to deploy your model.
  1. In Fusion, navigate to Collections > Jobs.
  2. Add a job by clicking the Add+ Button and selecting Create Seldon Core Model Deployment.
  3. Fill in each of the text fields:
    Job ID: A string used by the Fusion API to reference the job after its creation.
    Model name: A name for the deployed model. This is used to generate the deployment name in Seldon Core. It is also the name that you reference as a model-id when making predictions with the ML Service.
    Model replicas: The number of load-balanced replicas of the model to deploy; specify multiple replicas for a higher-volume intake.
    Docker Repository: The public or private repository where the Docker image is located. If you’re using Docker Hub, fill in the Docker Hub username here.
    Image name: The name of the image with an optional tag. If no tag is given, latest is used.
    Kubernetes secret: If you’re using a private repository, supply the name of the Kubernetes secret used for access.
    Output columns: A list of column names that the model’s predict method returns.
  4. Click Save, then Run and Start. When the job finishes successfully, you can proceed to the next section.
Now that the model is in Fusion, it can be used in index or query pipelines, depending on its purpose. In this case the model is a sentence vectorizer used for semantic vector search, so both pipelines must invoke it.
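To verify the deployment outside of a pipeline, you can call the model through Fusion’s ML service, mirroring the curl example in the local testing section. Below is a sketch in Python; the host, credentials, and model name are placeholders for your environment, and the requests library is assumed.
import requests

FUSION_HOST = "https://<your-fusion>.lucidworks.com:6764"  # placeholder host
MODEL_NAME = "mini"                                        # the Model name used in the deployment job

resp = requests.post(
    f"{FUSION_HOST}/api/ai/ml-models/{MODEL_NAME}/prediction",
    json={"text": "i love fusion"},
    auth=("FUSION_USER", "FUSION_PASSWORD"),               # placeholder credentials
)
print(resp.status_code, resp.json())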

Apply an API key to the deployment

These steps are only needed if your model utilizes any kind of secret, such as an API key. If not, skip this section and proceed to the next.
  1. Create and modify a <seldon_model_name>_sdep.yaml file.
    In the first line, kubectl get sdep gets the details for the currently running Seldon Deployment job and saves those details to a YAML file. kubectl apply -f <seldon_model_name>_sdep.yaml adds the key to the Seldon Deployment job the next time it launches. (A sketch of reading the key from your model code follows this list.)
    kubectl get sdep <seldon_model_name> -o yaml > <seldon_model_name>_sdep.yaml
    # Modify <seldon_model_name>_sdep.yaml to add
           - env:
             - name: API_KEY
               value: "your-api-key-here"
    kubectl apply -f <seldon_model_name>_sdep.yaml
    
  2. Delete sdep before redeploying the model. The currently running Seldon Deployment job does not have the key applied to it. Delete it before redeploying and the new job will have the key.
    kubectl delete sdep <seldon_model_name>
    
  3. Lastly, you can proceed to encode into Milvus, as described in the next sections.
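If your model needs the key at runtime, it can read the environment variable injected above from inside the class. The snippet below is a minimal sketch, assuming the variable is named API_KEY as in the YAML example; the class name here is illustrative only.
import os

class my_model():
    def __init__(self):
        # API_KEY is set on the Seldon Deployment via the sdep YAML above.
        self.api_key = os.environ.get("API_KEY")
        if not self.api_key:
            raise RuntimeError("API_KEY is not set on the deployment")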

Create a Milvus collection

  1. In Fusion, navigate to Collections > Jobs.
  2. Click the Add+ Button and select Create Collections in Milvus.
    This job creates a collection in Milvus for storing the vectors sent to it. The job is needed because a collection does not automatically spawn at indexing or query time if it does not already exist.
  3. Name the job and the collection.
  4. Click Add on the right side of the job panel.
    The key to creating the collection is the Dimension text field; this must exactly match the dimension of your model’s output prediction.
    In our example the output shape is (1, 384), so 384 goes in the collection’s Dimension field. The Metric field should typically be left at the default of Inner Product, but this also depends on the use case and model type.
  5. Click Save, then Run and Start.

Configure the Fusion pipelines

Your real-world pipeline configuration depends on your use case and model, but for our example we will configure the index pipeline and then the query pipeline.
Configure the index pipeline
  1. Create a new index pipeline or load an existing one for editing.
  2. Click Add a Stage and then Encode to Milvus.
  3. In the new stage, fill in these fields:
    • The name of your model
    • The output name you have for your model job
    • The field you’d like to encode
    • The collection name
  4. Save the stage in the pipeline and index your data with it.
Configure the query pipeline
  1. Create a new query pipeline or load an existing one for editing.
  2. Click Add a Stage and then Milvus Query.
  3. Fill in the configuration fields, then save the stage.
  4. Add a Milvus Ensemble Query stage.
    This stage is necessary for the Milvus collection scores to be taken into account in ranking and to weight multiple collections. The Milvus Results Context Key from the Milvus Query stage is used in this stage to perform math on the Milvus result scores. One (1) is a typical multiplier for the Milvus results, but any number can be used.
  5. Save the stage and then run a query by typing a search term.
  6. To verify the Milvus results are correct, use the Compare+ button to see another pipeline without the model implementation and compare the number of results.
You have now successfully uploaded a Seldon Core model to Fusion and deployed it.

  1. Originally deprecated in Fusion 5.12.0.
  1. Originally deprecated in Fusion 5.11.0.
  1. Originally deprecated in Fusion 5.8.0.
  1. Originally deprecated in Fusion 5.1.2.