Released on February 20, 2025, this maintenance release introduces a new Neural Hybrid Search query pipeline stage for Lucidworks AI, support for Kubernetes 1.31, Spark 3.4.1, security updates, and bug fixes.
For supported Kubernetes versions and key component versions, see Platform support and component versions.

Key highlights

Introducing the Neural Hybrid Query stage for improved relevance

Managed Fusion 5.9.10 introduces the Neural Hybrid Query stage, enhancing Neural Hybrid Search (NHS) by refining how semantic and lexical scores are combined in search ranking. This new stage works with Lucidworks AI to ensure that result collapsing and final ranking use a unified scoring approach, leading to more precise and consistent search results—especially in ecommerce, where selecting the most relevant SKU for a product is critical. Key benefits include:
  • Smarter relevance ranking: Combines semantic and lexical signals earlier for more accurate ordering.
  • Optimized result collapsing: Ensures the best representative item is selected before final ranking.
  • Broad applicability: Enhances search performance in ecommerce and other scenarios that use result collapsing.
The Neural Hybrid Query stage differs from the Hybrid Query stage in a few ways. It adds the following fields:
  • Lexical Query Squash Factor lets you input a value that squashes the lexical query scores from the range 0..inf to 0..1. This setting helps prevent the lexical query from dominating the final score (see the simplified sketch below).
  • Compute Vector Similarity for Lexical-Only Matches computes vector similarity scores for documents that appear in the lexical search results but not in the initial vector search results. This setting can rescue those otherwise orphaned documents by computing a vector similarity score for them.
This update provides a more intelligent hybrid search experience, improving relevance across a range of search applications.
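The following minimal Python sketch illustrates the idea behind squashing and blending scores. It is illustrative only: the exact formula, weights, and defaults Fusion uses are not reproduced here, and the squash function shown is just one common way to map an unbounded score into 0..1.
def squash(lexical_score, squash_factor):
    # Map a lexical (BM25-style) score from 0..inf into 0..1.
    return (squash_factor * lexical_score) / (1.0 + squash_factor * lexical_score)

def hybrid_score(lexical_score, vector_score, lexical_weight=0.3, vector_weight=0.7, squash_factor=0.1):
    # Blend a squashed lexical score with a vector similarity score that is already in 0..1.
    return lexical_weight * squash(lexical_score, squash_factor) + vector_weight * vector_score

# Even a very large lexical score cannot dominate the blended score:
print(hybrid_score(lexical_score=85.0, vector_score=0.92))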

Support for Kubernetes 1.31

Managed Fusion 5.9.10 introduces support for Kubernetes 1.31, bringing enhanced security, improved resource management, and better networking reliability. This update strengthens container security, improves how custom resources are managed and filtered, and enhances the reliability of kubectl operations like exec and port-forward, especially in complex network environments. By upgrading to Managed Fusion 5.9.10, you can take full advantage of Kubernetes 1.31’s advancements for stronger security, streamlined resource handling, and improved system stability.

Expanded support for read-only file system

Managed Fusion 5.9.10 expands support for the read-only root file system feature across Managed Fusion services, strengthening protection against unauthorized changes. Read-only mode is enabled by default for some Managed Fusion services. See Read-only root file system for a list of services that support it or have it enabled by default.

Faster, more efficient data processing with Spark 3.4.1

Managed Fusion 5.9.10 upgrades Apache Spark to 3.4.1, bringing faster query execution, improved data transformation efficiency, and greater stability for distributed workloads. This enhancement optimizes indexing, refines SQL query handling, and ensures smoother analytics workflows, enabling you to process large-scale data with greater speed and precision. For more details, see the Spark 3.4.1 release notes. Note that the Spark 3.4.1 upgrade affects jobs that rely on Python 3.7 behavior: the bundled Python has moved to 3.10.x, so code written for 3.7 may no longer function correctly. Update your code to ensure compatibility with Python 3.10.x, then test your Spark jobs in a staging environment before deploying to production.
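As one concrete example of the kind of break to look for, aliases that Python 3.7 still tolerated were removed in Python 3.10, so imports like the one commented out below now fail:
# from collections import Mapping    # worked in Python 3.7 (with a DeprecationWarning), raises ImportError in 3.10
from collections.abc import Mapping   # Python 3.10-compatible replacement

def is_mapping(obj):
    # Returns True if obj behaves like a dict (runs on Python 3.10.x).
    return isinstance(obj, Mapping)

print(is_mapping({"query": "fusion"}))  # True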

Enhanced security and stability

Managed Fusion 5.9.10 introduces a new wave of security enhancements, ensuring a more resilient and up-to-date platform. This release includes critical updates across core services, including admin, frameworks, apps manager, classic connectors, query, and indexing, reinforcing protection across the stack. Additionally, we’ve updated the bitnami-shell base image and upgraded key-tools to v3.0.2, further strengthening security and compliance. These enhancements help maintain a robust and secure Managed Fusion environment, keeping your data and infrastructure protected while optimizing performance for mission-critical workloads.

Apps Manager API

The new Apps Manager API gives information about your Fusion license, entitlements, and usage.
LucidAcademy: Lucidworks offers free training to help you get started. The Quick Learning for Apps Manager API focuses on the purpose and functions of the Apps Manager API.
Visit the LucidAcademy to see the full training catalog.

Bug fixes

  • Solr-exporter pods no longer get stuck in an ImagePullBackOff state, ensuring they pull the correct image and start reliably.
  • The job-launcher and job-rest-server services now start correctly in SSL mode, resolving an issue where missing dependencies caused failures during initialization.
  • Managed Fusion now returns all matching search rules and rewrites in Commerce Studio instead of just the first ten, ensuring complete rule retrieval and better compatibility between the two systems.
  • Prometheus stage execution histograms and counters now include stage labels, making it easier to interpret stage metrics.
  • Resolved an issue in Managed Fusion 5.9.4 where v2 connectors failed to start in certain self-hosted EKS environments, preventing timeouts and ensuring successful job execution.
  • Increased the request buffer size in lwai-gateway from 250 KB to 5 MB, allowing large messages to be processed without failures.

Known issues

  • Saving large pipelines during high traffic may trigger service instability. In some environments, saving large query pipelines while handling high traffic loads can cause the Query service to crash with OOM errors due to thread contention.
    Managed Fusion 5.9.14 resolves this issue. If you’re impacted and not yet on this version, contact Lucidworks Support for mitigation options.

Deprecations

For full details on deprecations, see Deprecations and Removals.
  • Managed Fusion has deprecated the Webapps service.
    In previous versions of Managed Fusion, you could use this service to deploy an App Studio WAR file into Managed Fusion.
  • MLeap support has been deprecated. MLeap was used for machine learning tasks in Fusion, including SpaCy and SparkNLP deployments and certain ML models. Instead, refer to the Develop and Deploy a Machine Learning Model guide.
This tutorial walks you through deploying your own model to Fusion with Seldon Core.

Prerequisites

  • A Fusion instance with an app and indexed data
  • An understanding of Python and the ability to write Python code
  • Docker installed locally, plus a private or public Docker repository
  • Seldon-core installed locally: pip install seldon-core
  • Code editor; you can use any editor, but Visual Studio Code is used in the example
  • Model: paraphrase-multilingual-MiniLM-L12-v2 from Hugging Face
  • Docker image: example_sbert_model

Tips

  • Always test your Python code locally before uploading to Docker and then Fusion. This simplifies troubleshooting significantly.
  • Once you’ve created your Docker image, you can also test locally by running docker run with a specified port, like 9000, and then curling that port to confirm functionality before deploying to Fusion. See the testing example below.
LucidAcademy: Lucidworks offers free training to help you get started. The Course for Intro to Machine Learning in Fusion focuses on using machine learning to infer the goals of customers and users in order to deliver a more sophisticated search experience.
Visit the LucidAcademy to see the full training catalog.

Local testing example

The examples in this section use the following commands:
  1. Docker command:
     docker run -p 127.0.0.1:9000:9000 <your-docker-image>
    
  2. Curl to hit Docker:
     curl -X POST -H 'Content-Type: application/json' -d '{"data": { "ndarray": ["Sentence to test"], "names":["text"]} }' http://localhost:9000/api/v1.0/predictions
    
  3. Curl model in Fusion:
     curl -u $FUSION_USER:$FUSION_PASSWORD -X POST -H 'Content-Type: application/json' -d '{"text": "i love fusion"}' https://<your-fusion>.lucidworks.com:6764/api/ai/ml-models/<your-model>/prediction
    
  4. See all your deployed models:
     curl -u USERNAME:PASSWORD http://FUSION_HOST:FUSION_PORT/api/ai/ml-models
    

Download the model

This tutorial uses the paraphrase-multilingual-MiniLM-L12-v2 model from Hugging Face, but any pre-trained model from https://huggingface.co will work with this tutorial. If you want to use your own model instead, you can do so, but your model must have been trained and then saved through a function similar to PyTorch’s torch.save(model, PATH) function. See Saving and Loading Models in the PyTorch documentation.
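For reference, here is a minimal sketch of the torch.save pattern the tutorial expects. The model and file name below are placeholders for illustration, not files used elsewhere in this tutorial.
import torch
import torch.nn as nn

# Placeholder model for illustration; substitute your own trained model.
model = nn.Sequential(nn.Linear(384, 128), nn.ReLU(), nn.Linear(128, 2))

# Save the entire model object, as this tutorial expects.
torch.save(model, "my_model.pt")

# Later, load it back for inference.
loaded = torch.load("my_model.pt")
loaded.eval()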

Format a Python class

The next step is to format a Python class which will be invoked by Fusion to get the results from your model. The skeleton below represents the format that you should follow. See also Packaging a Python model for Seldon Core using Docker in the Seldon Core documentation.
class MyModel(object):
    """
    Model template. You can load your model parameters in __init__ from a
    location accessible at runtime
    """

    def __init__(self):
        """
        Add any initialization parameters. These will be passed at runtime
        from the graph definition parameters defined in your seldondeployment
        kubernetes resource manifest.
        """
        print("Initializing")

    def predict(self,X,features_names,**kwargs):
        """
        Return a prediction.

        Parameters
        ----------
        X : array-like
        features_names : array of feature names (optional)
        """
        print("Predict called - will run identity function")
        return X

    def class_names(self):
        return ["X_name"]
A real instance of this class with the Paraphrase Multilingual MiniLM L12 v2 model is as follows:
import logging
import os

from transformers import AutoTokenizer, AutoModel
from torch.nn import functional as F
from typing import Iterable
import numpy as np
import torch

log = logging.getLogger()

class mini():
    def __init__(self):
        self.tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
        self.model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

    #Mean Pooling
    def mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0] #First element of model_output contains all token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    def predict(self, X:np.ndarray, names=None, **kwargs):
        #   In Fusion, several variables are passed in the numpy array by the Milvus Query stage,
        #   the Encode to Milvus index stage, and the Vectorize Seldon index and query stages:
        #   [pipeline, bool, and text]. Text is the variable that will be encoded, so it is assigned to 'text'.
        #   When using the Machine Learning stage, the input map keys should match what is in this file.

        model_input = dict(zip(names, X))
        text = model_input["text"]

        with torch.inference_mode(): # Allows torch to run more quickly
          # Tokenize sentences
          encoded_input = self.tokenizer(text, padding=True, truncation=True, return_tensors='pt')
          log.debug('encoded input %s', str(encoded_input))
          model_output = self.model(**encoded_input)
          log.debug('model output %s', str(model_output))

          # Perform pooling. In this case, mean pooling.
          sentence_embeddings = self.mean_pooling(model_output, encoded_input['attention_mask'])
          # Normalize embeddings, because Fusion likes it that way.
          sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=-1)
          # Fixing the shape of the embeddings to match (1, 384).
          final = [sentence_embeddings.squeeze().cpu().detach().numpy().tolist()]
        return final

    def class_names(self) -> Iterable[str]:
        return ["vector"]
In the above code, an additional function has been added to the class; this is completely fine to do. Logging has also been added for debugging purposes. Two functions are non-negotiable:
  • init: The init function is where models, tokenizers, vectorizers, and the like should be set to self for invoking.
    It is recommended that you include your model’s trained parameters directly into the Docker container rather than reaching out to external storage inside init.
  • predict: The predict function processes the field or query that Fusion passes to the model.
    The predict function must handle any text processing needed for the model to accept the input, invoke the model’s model.evaluate(), model.predict(), or equivalent function, and obtain the expected model result.
    If the output needs additional manipulation, that should be done before the result is returned.
    For embedding models the return value must have the shape of (1, DIM), where DIM (dimension) is a consistent integer, to enable Fusion to handle the vector encoding into Milvus or Solr.
Use the exact name of the class when naming this file. For the example above, the Python file is named mini.py and the class name is mini().
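Before building the Docker image, you can sanity-check the class locally, as recommended in the Tips section. The following sketch assumes mini.py is in the current directory and that the packages listed in the requirements file (covered below) are installed.
import numpy as np
from mini import mini

model = mini()

# Mimic the (X, names) pairing that the predict method expects.
names = ["text"]
X = np.array(["i love fusion"], dtype=object)

result = model.predict(X, names)
print(len(result), len(result[0]))  # Expect 1 and 384, matching the required (1, 384) output shape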

Create a Dockerfile

The next step is to create a Dockerfile. The Dockerfile should follow this general outline; read the comments for additional details:
# It is important that the Python version is 3.x-slim for seldon-core
FROM python:3.9-slim
# Whatever directory (folder) contains the Python file for your class, the Dockerfile, and
# requirements.txt should be copied and then set as the working directory.
COPY . /app
WORKDIR /app

# The requirements file for the Docker container
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# GRPC - Allows Fusion to do a Remote Procedure Call
EXPOSE 5000

# Define environment variable for seldon-core
# !!!MODEL_NAME must be the EXACT same as the python file & python class name!!!
ENV MODEL_NAME mini
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0

# Change ownership of the working directory (/app, copied above) to the default user, required for Fusion
RUN chown -R 8888 /app

# Command to wrap python class with seldon-core to allow it to be usable in Fusion
CMD ["sh", "-c", "seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE"]

# You can use the following if you need shell features like environment variable expansion or
# shell constructs like pipes, redirects, etc.
# See https://docs.docker.com/reference/dockerfile/#cmd for more details.
# CMD exec seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE

Create a requirements file

The requirements.txt file is a list of installs for the Dockerfile to run to ensure the Docker container has the right resources to run the model.
For the Paraphrase Multilingual MiniLM L12 v2 model, the requirements are as follows:
seldon-core
torch
transformers
numpy
In general, if an item is used in an import statement in your Python file, it should be included in the requirements file. An easy way to populate the requirements is to run the following command in the terminal, inside the directory that contains your code:
pip freeze > requirements.txt
If you use pip freeze, you must manually add seldon-core to the requirements file because it is not invoked in the Python file but is required for containerization.

Build and push the Docker image

After creating the <your_model>.py, Dockerfile, and requirements.txt files, you need to run a few Docker commands. Run the commands below in order:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
docker push [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
Using the example model, the terminal commands would be as follows:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t jstrmec/example_sbert_model:0.14; docker push jstrmec/example_sbert_model:0.14
This repository is public and you can visit it here: example_sbert_model

Deploy the model in Fusion

Now you can go to Fusion to deploy your model.
  1. In Fusion, navigate to Collections > Jobs.
  2. Add a job by clicking the Add+ Button and selecting Create Seldon Core Model Deployment.
  3. Fill in each of the text fields:
    • Job ID: A string used by the Fusion API to reference the job after its creation.
    • Model name: A name for the deployed model. This is used to generate the deployment name in Seldon Core. It is also the name that you reference as a model-id when making predictions with the ML Service.
    • Model replicas: The number of load-balanced replicas of the model to deploy; specify multiple replicas for higher-volume intake.
    • Docker Repository: The public or private repository where the Docker image is located. If you’re using Docker Hub, fill in the Docker Hub username here.
    • Image name: The name of the image with an optional tag. If no tag is given, latest is used.
    • Kubernetes secret: If you’re using a private repository, supply the name of the Kubernetes secret used for access.
    • Output columns: A list of column names that the model’s predict method returns.
  4. Click Save, then Run and Start. When the job finishes successfully, you can proceed to the next section.
Now that the model is in Fusion, it can be utilized in either index or query pipelines, depending on the model’s purpose. In this case the model is a word vectorizer or semantic vector search implementation, so both pipelines must invoke the model.

Apply an API key to the deployment

These steps are only needed if your model utilizes any kind of secret, such as an API key. If not, skip this section and proceed to the next.
  1. Create and modify a <seldon_model_name>_sdep.yaml file.
    In the first command, kubectl get sdep gets the details for the currently running Seldon Deployment and saves them to a YAML file. After you modify that file, kubectl apply -f <seldon_model_name>_sdep.yaml adds the key to the Seldon Deployment the next time it launches.
    kubectl get sdep <seldon_model_name> -o yaml > <seldon_model_name>_sdep.yaml
    # Modify <seldon_model_name>_sdep.yaml to add
           - env:
             - name: API_KEY
               value: "your-api-key-here"
    kubectl apply -f <seldon_model_name>_sdep.yaml
    
  2. Delete sdep before redeploying the model. The currently running Seldon Deployment job does not have the key applied to it. Delete it before redeploying and the new job will have the key.
    kubectl delete sdep <seldon_model_name>
    
  3. Lastly, you can encode your data into Milvus, as described in the next section.

Create a Milvus collection

  1. In Fusion, navigate to Collections > Jobs.
  2. Click the Add+ Button and select Create Collections in Milvus.
    This job creates a collection in Milvus for storing the vectors sent to it. The job is needed because a collection does not automatically spawn at indexing or query time if it does not already exist.
  3. Name the job and the collection.
  4. Click Add on the right side of the job panel.
    The key to creating the collection is the Dimension text field; this must exactly match the shape value your output prediction has.
    In our example the shape is (1, 384), so 384 goes in the collection’s Dimension field (a quick way to confirm this value is shown after these steps). The Metric field should typically be left at the default of Inner Product, but this also depends on your use case and model type.
  5. Click Save, then Run and Start.
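If you want to confirm the dimension value before creating the collection, a quick local check against the class from earlier works. This sketch assumes mini.py is importable, as in the local test shown earlier.
import numpy as np
from mini import mini

model = mini()
vector = model.predict(np.array(["dimension check"], dtype=object), ["text"])[0]
print(len(vector))  # 384 for paraphrase-multilingual-MiniLM-L12-v2; enter this value in the Dimension field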

Configure the Fusion pipelines

Your real-world pipeline configuration depends on your use case and model, but for our example we will configure the index pipeline and then the query pipeline.
Configure the index pipeline
  1. Create a new index pipeline or load an existing one for editing.
  2. Click Add a Stage and then Encode to Milvus.
  3. In the new stage, fill in these fields:
    • The name of your model
    • The output name you have for your model job
    • The field you’d like to encode
    • The collection name
  4. Save the stage in the pipeline and index your data with it.
Configure the query pipeline
  1. Create a new query pipeline or load an existing one for editing.
  2. Click Add a Stage and then Milvus Query.
  3. Fill in the configuration fields, then save the stage.
  4. Add a Milvus Ensemble Query stage.
    This stage is necessary to have the Milvus collection scores taken into account in ranking and to weight multiple collections. The Milvus Results Context Key from the Milvus Query stage is used in this stage to perform math on the Milvus result scores. One (1) is a typical multiplier for the Milvus results, but any number can be used.
  5. Save the stage and then run a query by typing a search term.
  6. To verify the Milvus results are correct, use the Compare+ button to see another pipeline without the model implementation and compare the number of results.
You have now successfully uploaded a Seldon Core model to Fusion and deployed it.

Removals

For full details on removals, see Deprecations and Removals.

Bitnami removal

By August 28, 2025, Fusion’s Helm chart will reference internally built open-source images instead of Bitnami images, due to changes in how Bitnami hosts its images.

Forked Apache Tika Parser removal

The Forked Apache Tika parser stage has been removed. Use asynchronous Tika parsing instead.

Platform Support and Component Versions

Kubernetes platform support

Lucidworks has tested and validated support for the following Kubernetes platform and versions:
  • Google Kubernetes Engine (GKE): 1.28, 1.29, 1.30, 1.31
For more information on Kubernetes version support, see the Kubernetes support policy.

Component versions

The following table details the versions of key components that may be critical to deployments and upgrades.
Component              Version
Solr                   fusion-solr 5.9.10 (based on Solr 9.6.1)
ZooKeeper              3.9.1
Spark                  3.4.1
Ingress Controllers    Nginx, Ambassador (Envoy), GKE Ingress Controller. Istio is not supported.
More information about support dates can be found at Lucidworks Fusion Product Lifecycle.