Develop and Deploy a Machine Learning Model
To prepare your model for deployment, install the Seldon Core Python package:

`pip install seldon-core`

Once the model is containerized, you can test it locally using `docker run` with a specified port, like 9000, which you can then curl to confirm functionality in Fusion. See the testing example below.

If you are using PyTorch, save the whole model with the `torch.save(model, PATH)` function. See Saving and Loading Models in the PyTorch documentation.
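For example, here is a minimal sketch using a stand-in module (substitute your own trained model object):

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for your trained model

# Save the whole model object, not just its state_dict.
torch.save(model, "model.pt")

# Later, e.g. inside init: reload the full model for inference.
model = torch.load("model.pt", weights_only=False)
model.eval()
```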
The Python class that wraps your model defines two key functions:

- `init`: The `init` function is where models, tokenizers, vectorizers, and the like should be set to `self` for invoking. It is recommended that you include your model's trained parameters directly in the Docker container rather than reaching out to external storage inside `init`.
- `predict`: The `predict` function processes the field or query that Fusion passes to the model. It must handle any text processing needed for the model to accept input, invoke `model.evaluate()`, `model.predict()`, or an equivalent function to get the expected model result, and perform any additional manipulation of the output before the result is returned. For embedding models, the return value must have the shape `(1, DIM)`, where `DIM` (dimension) is a consistent integer, to enable Fusion to handle the vector encoding into Milvus or Solr.
In the example below, the file name is `mini.py` and the class name is `mini()`.
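Here is a minimal sketch of what such a class might look like. The `sentence-transformers` dependency and the `/app/model` path are assumptions for illustration; substitute your own model and preprocessing.

```python
# mini.py -- a sketch of a Seldon-compatible embedding model class.
from sentence_transformers import SentenceTransformer  # assumed dependency


class mini:
    def __init__(self):
        # init: set models, tokenizers, vectorizers, and the like on self.
        # The weights are loaded from a path baked into the Docker image
        # rather than fetched from external storage.
        self.model = SentenceTransformer("/app/model")

    def predict(self, X, features_names=None):
        # predict: X is the field or query text passed in by Fusion.
        text = X if isinstance(X, str) else str(X[0])
        # Returns a numpy array of shape (1, DIM), as Fusion requires
        # for encoding vectors into Milvus or Solr.
        return self.model.encode([text])
```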
The `requirements.txt` file is a list of installs for the `Dockerfile` to run, ensuring the Docker container has the right resources to run the model. If a package appears in an `import` statement in your Python file, it should be included in the requirements file. An easy way to populate the requirements is to run the following command in the terminal, inside the directory that contains your code: `pip freeze > requirements.txt`. However, you must manually add `seldon-core` to the requirements file, because it is not invoked in the Python file but is required for containerization.

After creating the `<your_model>.py`, `Dockerfile`, and `requirements.txt` files, you need to run a few Docker commands. Run the commands below in order:
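The following is a sketch of a typical sequence, assuming a Docker Hub image named `<username>/mini` and the standard `seldon-core` microservice REST endpoint; the names and port are placeholders:

```bash
# Build the image from the directory containing mini.py, Dockerfile,
# and requirements.txt.
docker build -t <username>/mini:latest .

# Run the container locally with a specified port, like 9000.
docker run -p 9000:9000 <username>/mini:latest

# Curl the container to confirm functionality before deploying to Fusion.
curl -X POST http://localhost:9000/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": ["test input"]}}'

# Push the image to the repository Fusion will pull from.
docker push <username>/mini:latest
```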
Once the image is pushed, deploy the model with a Seldon Core model deployment job in Fusion, configured with the following parameters:

| Parameter | Description |
| --- | --- |
| Job ID | A string used by the Fusion API to reference the job after its creation. |
| Model name | A name for the deployed model. This is used to generate the deployment name in Seldon Core. It is also the name that you reference as a `model-id` when making predictions with the ML Service. |
| Model replicas | The number of load-balanced replicas of the model to deploy; specify multiple replicas for higher-volume intake. |
| Docker Repository | The public or private repository where the Docker image is located. If you're using Docker Hub, fill in the Docker Hub username here. |
| Image name | The name of the image, with an optional tag. If no tag is given, `latest` is used. |
| Kubernetes secret | If you're using a private repository, supply the name of the Kubernetes secret used for access. |
| Output columns | A list of column names that the model's `predict` method returns. |
To add a key, such as the Kubernetes secret for a private repository, to an existing deployment, edit its YAML definition: `kubectl get sdep` gets the details for the currently running Seldon Deployment job, which you save to a `<seldon_model_name>_sdep.yaml` file. `kubectl apply -f open_sdep.yaml` then adds the key to the Seldon Deployment job the next time it launches.

Delete `sdep` before redeploying the model. The currently running Seldon Deployment job does not have the key applied to it; delete it before redeploying, and the new job will have the key.
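Here is a sketch of that sequence, assuming the deployment is named `open` to match the `open_sdep.yaml` example above:

```bash
# Save the currently running Seldon Deployment job's details to YAML.
kubectl get sdep open -o yaml > open_sdep.yaml

# Edit open_sdep.yaml to add the key, then delete the running job,
# which does not have the key applied.
kubectl delete sdep open

# Relaunch the deployment; the new job picks up the key from the YAML.
kubectl apply -f open_sdep.yaml
```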
For the Milvus collection, the recommended metric is generally `Inner Product`, but this also depends on use case and model type.

When entering configuration values in the UI, use unescaped characters, such as `\t` for the tab character. When entering configuration values in the API, use escaped characters, such as `\\t` for the tab character.