Feature | Deprecated | Expected Removal | Notes |
---|---|---|---|
Hybrid Query (5.9.9 and earlier): A stage to perform hybrid lexical-semantic search that combines BM25-type lexical search with KNN dense vector search via Solr. | 5.9.13 | TBD | Use Neural Hybrid Query (5.9.10 and later) instead to benefit from updates and improvements to the search capabilities of the stage. |
App Insights: A tool within the Fusion workspace that provides real-time, searchable reports and visualizations from signals data. | 5.9.13[1] | TBD or no earlier than October 22, 2025 | - |
Parsers Indexing CRUD API: The Parsers API provides CRUD operations for parsers, allowing users to create, read, update, and delete parsers. | 5.9.11[1] | TBD or no earlier than September 4, 2025 | A new API introduced in Fusion 5.12.0, Async Parsing API, replaces the Parsers Indexing CRUD API and is available in Fusion 5.9.11. This API provides improved functionality and aligns with Fusion’s updated architecture, ensuring consistency across versions. |
Smart Answers Coldstart Training job: The Smart Answers Coldstart Training job in Fusion is designed to help train models when there is no historical training data available. | 5.9.13[1] | TBD or no earlier than October 22, 2025 | Use pre-trained models or the supervised training job instead of the Smart Answers Coldstart Training job. Pre-trained models eliminate the need for manual training when historical data is unavailable, while supervised training jobs offer greater flexibility in model customization. |
Data Augmentation Job: The Data Augmentation Job is designed to enhance training and testing data for machine learning models by increasing data quantity and introducing textual variations. It performs tasks such as backtranslation, synonym substitution, keystroke misspelling, and split word tasks. | 5.9.13[1] | TBD or no earlier than October 22, 2025 | - |
Webapps Service: The Webapps service provides an embedded instance of App Studio within each Fusion instance, simplifying the deployment process. | 5.9.10[2] | Version TBD, but no earlier than August 20, 2025 | Deploy App Studio Enterprise by following Deploy App Studio Enterprise to a Fusion 5 Cluster (GKE) instead of relying on the Webapps service. This method improves scalability and provides a more robust deployment approach for enterprise environments. |
Support for Nashorn JavaScript engine: Fusion uses the Nashorn JavaScript engine for the JavaScript index and query stages. | 5.9.8 | TBD or no earlier than July 7, 2025 | Use the OpenJDK Nashorn JavaScript engine instead of the deprecated Nashorn JavaScript engine in Fusion. This ensures continued JavaScript execution compatibility in pipeline configurations. You can select the engine from a dropdown in the pipeline views or in the workbenches. |
Milvus Ensemble Query Stage: The Milvus Ensemble Query stage is used to enhance search results by incorporating vector-based similarity scoring. | 5.9.5 | TBD or no earlier than May 4, 2025 | Replace the Milvus Ensemble Query Stage with Seldon or Lucidworks AI vector query stages. These alternatives improve vector search integration and support within Fusion’s evolving AI and machine learning capabilities. |
Milvus Query Stage: The Milvus Query stage performs vector similarity search in Milvus, an open source vector similarity search engine integrated into Fusion to streamline its deep learning capabilities and reduce the workload on Solr. | 5.9.5 | TBD or no earlier than May 4, 2025 | Replace the Milvus Query Stage with Seldon or Lucidworks AI vector query stages. These options enhance query efficiency and provide broader support for machine learning-driven search. |
Milvus Response Update Query Stage: The Milvus Response Update stage is designed to update response documents with vector similarity and ensemble scores. | 5.9.5 | TBD or no earlier than May 4, 2025 | Replace the Milvus Response Update Query Stage with Seldon or Lucidworks AI vector query stages. These alternatives improve performance when updating response documents with vector similarity data. |
Domain-Specific Language (DSL): The Domain-Specific Language (DSL) in Fusion is designed to simplify the complexity of crafting search queries. It allows users to express complex search queries without needing to understand the intricate syntax required by the legacy Solr parameter format. | 5.9.4 | TBD | Avoid using the Domain-Specific Language (DSL) feature, as it may cause performance degradation. Instead, use the DSL to Legacy Parameters query pipeline stage to convert DSL requests to the legacy Solr format while maintaining compatibility. |
Security Trimming Query Stage: The Security Trimming query pipeline stage in Fusion is designed to restrict query results by matching security ACL metadata, ensuring that users only see results they are authorized to access. | 5.9.0 | TBD | Replace the Security Trimming Query Stage with the Graph Security Trimming Query Stage. The new method uses a single filter query across all data sources. |
Field Parser Index Stage: The Field Parser index pipeline stage in Fusion is designed to parse content embedded within fields of documents. This stage operates separately from the parsers that handle whole documents. | All versions of 5.9.x[3] | TBD | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents. |
Tika Server Parser: Apache Tika Server is a versatile parser that supports parsing many document formats and is designed for Enterprise Search crawls. This stage is not compatible with asynchronous Tika parsing. | All versions of 5.9.x[3] | 5.9.12 | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents. |
Apache Tika Parser: The Apache Tika Parser is a versatile tool designed to support the parsing of numerous unstructured document formats. | All versions of 5.9.x[3] | TBD | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents. |
Logistic Regression Classifier Training Jobs: This job trains a logistic regression model with regularization to classify text into different categories. | All versions of 5.9.x[520] | TBD | Replace Logistic Regression Classifier Training Jobs with the Classification job. This alternative provides expanded configuration options and improved logging capabilities. |
MLeap deployments of SpaCy and SparkNLP | 5.9.10[520] | 5.9.12 | Use Develop and Deploy a Machine Learning Model instead. |
MLeap in Machine Learning models | 5.9.10[520] | 5.9.12 | - |
Query-to-Query Collaborative Similarity Job: This job uses SparkML’s Alternating Least Squares (ALS) to analyze past queries and find similarities between them. It helps recommend related queries or suggest relevant items based on previous searches. | All versions of 5.9.x[520] | TBD | Switch to Query-to-Query Session-Based Similarity jobs instead of the Query-to-Query Collaborative Similarity Job. The new method improves performance and increases the coverage of query similarity calculations. |
Random Forest Classifier Training Jobs: This job trains a machine learning model using a random forest algorithm to classify text into different categories. | All versions of 5.9.x[520] | TBD | Use the Classification job instead of Random Forest Classifier Training Jobs. This alternative provides enhanced configurability and better logging for improved model training. |
Time-based partitioning: Time-based partitioning in Fusion collections allows mapping to multiple Solr collections or partitions based on specific time ranges. | All versions of 5.9.x[520] | TBD | - |
Word2Vec Model Training Jobs: The Word2Vec model training job trains a shallow neural model to generate vector embeddings for text data and stores the results in a specified output collection. It supports configurable parameters for input data, model tuning, featurization, and output settings. | 5.9.11[520] | TBD or no earlier than September 4, 2025 | - |
Connectors fetcher property AccessControlFetcher: Connectors that support security filtering previously used separate fetchers for content and access control. One fetcher type is now used for both content and security fetching. AccessControlFetcher has been deprecated. | All versions of 5.9.x[4] | TBD | Fetcher implementations that use AccessControlFetcher should instead use ContentFetcher. |
Messaging Stage Configs | All versions of 5.9.x[4] | TBD | - |
Deploy App Studio Enterprise to a Fusion 5 Cluster (GKE)

Ensure the `fusion.conf` file points to the IP or URI and port of the proxy service. Run the App Studio Enterprise application locally and verify functioning security and search features with the cluster you are deploying to.

Create a `dockerfile`, then create the App Studio Enterprise Docker image, replacing the placeholders as follows:

- Replace `APP_NAME` with the name of your application.
- Replace `PATH` with the path to build from.
- Replace `LOCAL_PORT` with the port on your local machine that can access the app.
- Replace `APP_NAME` with the ASE application name.
- Replace `CLUSTER_NAME` with your existing Fusion 5 cluster’s name.
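As a rough sketch of the build and local-run steps, using the placeholder names from the list above (the container port `8080` is an illustrative assumption, not a value from this document):

```shell
# Build the ASE image from the dockerfile located at PATH.
docker build -t APP_NAME PATH

# Run it locally, mapping LOCAL_PORT on your machine to the
# (assumed) application port inside the container.
docker run -p LOCAL_PORT:8080 APP_NAME
```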
Develop and Deploy a Machine Learning Model

Install Seldon Core with `pip install seldon-core`. To test the model locally, use `docker run` with a specified port, like 9000, which you can then `curl` to confirm functionality in Fusion.
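One way to sketch that local check, assuming the container was started with port 9000 mapped and exposes Seldon Core's REST prediction endpoint (the payload shape below is illustrative):

```shell
# POST a sample input to the locally running model container.
curl -s -X POST http://localhost:9000/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": [["example query text"]]}}'
```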
See the testing example below.

Save a trained PyTorch model with the `torch.save(model, PATH)` function. See Saving and Loading Models in the PyTorch documentation.

The Python class for your model defines two key functions:

- `init`: The `init` function is where models, tokenizers, vectorizers, and the like should be set to `self` for invoking. It is recommended that you include your model’s trained parameters directly in the Docker container rather than reaching out to external storage inside `init`.
- `predict`: The `predict` function processes the field or query that Fusion passes to the model. It must handle any text processing needed for the model to accept input, and invoke its `model.evaluate()`, `model.predict()`, or equivalent function to get the expected model result. If the output needs additional manipulation, that should be done before the result is returned. For embedding models, the return value must have the shape `(1, DIM)`, where `DIM` (dimension) is a consistent integer, so that Fusion can handle the vector encoding into Milvus or Solr.
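A minimal sketch of such a class, following the `mini.py`/`mini()` naming used in this document. A toy embedding matrix stands in for a real trained model; the vocabulary and weights are purely illustrative:

```python
import numpy as np

class mini:
    """Minimal model class for containerized deployment (file: mini.py)."""

    def __init__(self):
        # In a real deployment, load trained parameters baked into the
        # Docker image here (model weights, tokenizers, vectorizers).
        self.vocab = {"fusion": 0, "search": 1, "vector": 2}
        self.weights = np.eye(3, 5)  # toy 3-token, 5-dimension embedding table

    def predict(self, X, feature_names=None):
        # X is the field or query text passed to the model.
        # Do whatever text processing the model needs before invoking it.
        text = X[0] if isinstance(X, (list, np.ndarray)) else X
        tokens = [self.vocab.get(t, 0) for t in str(text).lower().split()]
        # Mean-pool the token vectors, then reshape to (1, DIM) so the
        # caller can handle the vector encoding consistently.
        vec = self.weights[tokens].mean(axis=0)
        return vec.reshape(1, -1)
```

Calling `mini().predict(["fusion search"])` returns an array of shape `(1, 5)`, matching the `(1, DIM)` contract described above.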
In this example, the file name is `mini.py` and the class name is `mini()`.

The `requirements.txt` file is a list of installs for the `Dockerfile` to run, ensuring the Docker container has the right resources to run the model. If a library appears in an `import` statement in your Python file, it should be included in the requirements file. An easy way to populate the requirements is to run `pip freeze` in the terminal, inside the directory that contains your code. If you use `pip freeze`, you must manually add `seldon-core` to the requirements file because it is not invoked in the Python file but is required for containerization.

Once you have the `<your_model>.py`, `Dockerfile`, and `requirements.txt` files, you need to run a few Docker commands.
Run the commands below in order:

Parameter | Description |
---|---|
Job ID | A string used by the Fusion API to reference the job after its creation. |
Model name | A name for the deployed model. This is used to generate the deployment name in Seldon Core. It is also the name that you reference as a model-id when making predictions with the ML Service. |
Model replicas | The number of load-balanced replicas of the model to deploy; specify multiple replicas for a higher-volume intake. |
Docker Repository | The public or private repository where the Docker image is located. If you’re using Docker Hub, fill in the Docker Hub username here. |
Image name | The name of the image with an optional tag. If no tag is given, latest is used. |
Kubernetes secret | If you’re using a private repository, supply the name of the Kubernetes secret used for access. |
Output columns | A list of column names that the model’s predict method returns. |
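The requirements and image steps described above might look like the following sketch; the repository and image names are placeholders, not values from this document:

```shell
# Populate requirements from the current environment, then add
# seldon-core manually, since the model code does not import it
# but containerization requires it.
pip freeze > requirements.txt
echo "seldon-core" >> requirements.txt

# Build the model image and push it to your Docker repository.
docker build -t <docker-repo>/<image-name>:latest .
docker push <docker-repo>/<image-name>:latest
```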
The deployment details are saved to a `<seldon_model_name>_sdep.yaml` file. `kubectl get sdep` gets the details for the currently running Seldon Deployment job and saves those details to a YAML file. `kubectl apply -f open_sdep.yaml` adds the key to the Seldon Deployment job the next time it launches.

Delete the `sdep` before redeploying the model. The currently running Seldon Deployment job does not have the key applied to it; delete it before redeploying, and the new job will have the key.

A common choice of similarity metric is Inner Product
, but this also depends on use case and model type.
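The sdep round trip described above can be sketched as follows; the deployment name is a placeholder, and the file name follows the `open_sdep.yaml` example used above:

```shell
# Save the running Seldon Deployment's details to a YAML file.
kubectl get sdep <seldon_model_name> -o yaml > open_sdep.yaml

# Re-apply it so the key is added the next time the job launches.
kubectl apply -f open_sdep.yaml

# Delete the current deployment before redeploying; the new job
# will then have the key applied.
kubectl delete sdep <seldon_model_name>
```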
Feature | Removed | Notes |
---|---|---|
Fusion 5.11 | ||
MLeap | 5.11.0 | Use Develop and Deploy a Machine Learning Model instead. This method provides a more flexible and modern approach to deploying machine learning models. |
Subscriptions UI | 5.11.0 | - |
Fusion 5.10 | ||
Forked Apache Tika Parser | 5.10.0 | The Forked Apache Tika Parser was replaced by the Tika Asynchronous Parser. The asynchronous parser improves performance by handling document parsing more efficiently and scaling better for enterprise workloads. |
Analytics Catalog Query Stage | 5.10.0 | - |
Fusion 5.7 | ||
NLP Annotator Index Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example. |
NLP Annotator Query Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example. |
OpenNLP NER Extraction Index Stage | 5.7.0 | To implement similar functionality, see the Develop and Deploy a Machine Learning Model guide, which provides an adaptable example. |
Fusion 5.9 | ||
Tika Server Parser | 5.9.12 | Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents. |
MLeap in Machine Learning models | 5.9.12 | - |
Fusion 5.6 | ||
Fusion SQL | 5.6.1 | - |
Apache Pulsar | 5.6.0 | Apache Pulsar was removed in Fusion 5.6.0 and replaced with Kafka. Kafka offers better scalability, reliability, and industry support for message streaming. |
Log Viewer & DevOps Center UI panel | 5.6.0 | These features were removed in Fusion 5.6.0 as they depended on Apache Pulsar, which has been replaced by Kafka. Users should transition to Kafka-based logging and monitoring solutions. |
Subscriptions API | 5.6.0 | - |
Send to Message Bus Index Stage | 5.6.0 | - |
Fusion 5.5 | ||
Jupyter | 5.5.2 | - |
Superset | 5.5.2 | - |