Originally released on April 16, 2025 and re-released on May 14, 2025, this maintenance release provides support for chunking in Lucidworks AI, model hosting with Ray, and important security upgrades and bug fixes.
If you were upgraded to Managed Fusion 5.9.12 and rolled back to Managed Fusion 5.9.9, you will be upgraded to the re-released Managed Fusion 5.9.12.
This ensures you benefit from critical stability, security, and compatibility improvements included in the re-released version.
For supported Kubernetes versions and key component versions, see Platform support and component versions.

What’s new

Improved relevance in large documents with chunking

Fusion 5.9.12 introduces document chunking, a major advancement for search and generative AI quality and performance. Chunking breaks large documents into smaller, meaningful segments—called chunks—that are stored and indexed using Solr’s block join capabilities. Each chunk captures either lexical content (for keyword-based search) or semantic vectors (for neural search), enabling Fusion to retrieve the most relevant part of a document, rather than treating the document as a single block. This improves:
  • Search relevance: Users get results that point to the most relevant sections within large documents, not just documents that match overall.
  • Neural search precision: Vector chunks improve hybrid scoring by aligning semantic relevance with specific lexical content.
  • Scalability and maintainability: Updates or deletions are applied at the chunk level, ensuring consistency and avoiding stale or orphaned content.
  • Faceted search and UX: Results can be grouped and ranked more accurately, especially in use cases where dense documents contain multiple topics.
Document chunking is particularly valuable in knowledge management and technical content domains, where retrieving the right paragraph can be more important than retrieving the right document.
  • A new LWAI Chunker index pipeline stage uses one of the available chunking strategies (chunkers) for the specified Lucidworks AI model to provide optimized storage and retrieval. The chunkers asynchronously split the provided text in various ways, such as by sentence, new line, semantics, or regular expression (regex) syntax; a simplified illustration appears after this list.
  • The Chunking Neural Hybrid Query pipeline stage now detects chunked documents and retrieves the most relevant lexical and vector segments for hybrid search.
  • Updates and deletions now ensure consistent chunk synchronization to prevent orphaned data.
  • This feature includes a new Lucidworks AI Async Chunking API.
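The following Python sketch is a simplified, conceptual illustration of those splitting strategies. It is not the Lucidworks implementation (the hosted chunkers run asynchronously in Lucidworks AI and are considerably more robust), and the function names and regex pattern are invented for illustration only.
import re

def chunk_by_newline(text: str) -> list[str]:
    # Treat blank lines as paragraph boundaries and drop empty chunks.
    return [chunk.strip() for chunk in text.split("\n\n") if chunk.strip()]

def chunk_by_sentence(text: str, sentences_per_chunk: int = 3) -> list[str]:
    # Naive sentence splitting on terminal punctuation, grouped into small chunks.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

def chunk_by_regex(text: str, pattern: str = r"^##?\s") -> list[str]:
    # Split wherever a line matches the pattern, for example a Markdown heading.
    parts = re.split(pattern, text, flags=re.MULTILINE)
    return [part.strip() for part in parts if part.strip()]

Each resulting chunk is indexed as its own child document carrying either lexical content or a semantic vector, so queries can match the most relevant segment rather than the whole parent document.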
Contact your Lucidworks account manager to confirm that your license includes this feature.

Model hosting with Ray

Managed Fusion 5.9.12 introduces support for model hosting with Ray, replacing the previous Seldon-based approach. Ray offers a more scalable and efficient architecture for serving machine learning models, with native support for distributed inference, autoscaling, and streamlined deployment. This transition simplifies Managed Fusion’s AI infrastructure, enhances performance, and aligns with modern MLOps practices to make deploying and managing models faster, more reliable, and easier to monitor. For more information, see Develop and deploy a machine learning model with Ray.
This tutorial walks you through deploying your own model to Fusion with Ray.
This feature is only available in Fusion 5.9.x for versions 5.9.12 and later.

Prerequisites

  • A Fusion instance with an app and indexed data.
  • An understanding of Python and the ability to write Python code.
  • Docker installed locally, plus a private or public Docker repository.
  • Ray installed locally: pip install ray[serve] using the version of ray[serve] found in the release notes for your version of Managed Fusion.
  • Code editor; you can use any editor, but Visual Studio Code is used in this example.
  • Model: intfloat/e5-small-v2
  • Docker image: e5-small-v2-ray

Tips

  • Always test your Python code locally before uploading to Docker and then Fusion. This simplifies troubleshooting significantly.
  • Once you’ve created your Docker image, you can also test locally by running docker run with a specified port, such as 9000, and then using curl to confirm functionality before deploying to Fusion. See the testing example below.
  • If you previously deployed a model with Seldon, you can deploy the same model with Ray after making a few changes to your Docker image as explained in this topic. To avoid conflicts, deploy the model with a different name. When you have verified that the Ray model is working after deployment with Ray, you can delete the Seldon model using the Delete Seldon Core Model Deployment job.
  • If the model fails to deploy and you’re using the ‘real’ example, there is a very good chance you haven’t allocated enough memory or CPU in your job spec or in the Ray-Argo config. To increase the resources, edit the ConfigMap with kubectl edit configmap argo-deploy-ray-model-workflow -n <namespace>, find the ray-head container in the embedded escaped YAML, and raise the memory limit. Exercise caution: the escaping is easy to break, so change one value at a time without altering the formatting.
LucidAcademy: Lucidworks offers free training to help you get started. The course Intro to Machine Learning in Fusion focuses on using machine learning to infer the goals of customers and users in order to deliver a more sophisticated search experience.
Visit the LucidAcademy to see the full training catalog.

Local testing example

  1. Docker command:
    docker run -p 127.0.0.1:9000:8000 DOCKER_IMAGE
    
  2. Curl to hit the Docker container (a Python equivalent appears after this list):
    curl -i -X POST http://127.0.0.1:9000 -H 'Content-Type: application/json' -d '{"text": "The quick brown fox jumps over the lazy dog."}'
    
  3. Curl model in Fusion:
    curl -u $FUSION_USER:$FUSION_PASSWORD -X POST -H 'Content-Type: application/json' -d '{"text": "i love fusion"}' https://FUSION_HOST.com:6764/api/ai/ml-models/MODEL_NAME/prediction
    
  4. See all your deployed models:
    curl -u USERNAME:PASSWORD http://FUSION_HOST:FUSION_PORT/api/ai/ml-models
    
  5. Check the Ray UI to see Replica State, Resources, and Logs.
    If you are getting an internal model error, the best way to see what is going on is to port-forward the model service and query it directly.
    The MODEL_DEPLOYMENT in the command below can be found with kubectl get svc -n NAMESPACE. It has the same name as the model name set in the Create Ray Model Deployment job.
    kubectl -n NAMESPACE port-forward svc/MODEL_DEPLOYMENT-head-svc 8000:8000
    
    Once port-forwarding is successful, you can use the following curl command to reproduce the issue. The worker logs should then show helpful error messages.
    curl --location 'http://127.0.0.1:8000/' \
    --header 'charset: utf-8' \
    --header 'Content-Type: application/json' \
    --data '{"text": "i love fusion"}'
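
If you prefer to script the local check, the following Python sketch is roughly equivalent to the curl calls above. It assumes the requests package is installed and that the container is reachable on the host port mapped in the docker run command (9000 in this example), or on port 8000 if you are port-forwarding the Ray head service.
import requests

# Use 9000 for the docker run mapping above, or 8000 when port-forwarding.
url = "http://127.0.0.1:9000/"
payload = {"text": "The quick brown fox jumps over the lazy dog."}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()

result = response.json()
# For the embedding model in this tutorial, the response is expected to look
# like {"vector": [0.01, -0.02, ...]} with one float per vector dimension.
print(len(result["vector"]), "dimensions returned")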
    

Download the model

This tutorial uses the e5-small-v2 model from Hugging Face, but any pre-trained model from https://huggingface.co will work with this tutorial. If you want to use your own model instead, you can do so, but your model must have been trained and then saved through a function similar to PyTorch’s torch.save(model, PATH) function. See Saving and Loading Models in the PyTorch documentation.
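As a minimal illustration of that requirement, the sketch below saves and reloads a toy PyTorch model with torch.save and torch.load. The model here is a hypothetical stand-in; only the save and load pattern matters.
import torch
import torch.nn as nn

# Hypothetical toy model standing in for your trained model.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

torch.save(model, "my_model.pt")  # save the full model object

# Later, for example inside your deployment class:
loaded = torch.load("my_model.pt")  # newer PyTorch versions may require weights_only=False
loaded.eval()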

Format a Python class

The next step is to format a Python class which will be invoked by Fusion to get the results from your model. The skeleton below represents the format that you should follow. See also Getting Started in the Ray Serve documentation.
from typing import Any, Dict

from ray import serve
from starlette.requests import Request

# These defaults are for the ray serve deployment
# when running simply from docker. The 'Create Ray Model Deployment'
# job can override these replicas and resources if needed.
@serve.deployment(num_replicas=1, ray_actor_options={"num_cpus": 1})
class Deployment(object):
    def __init__(self):
        """
        Add any initialization parameters. Generally this is where you would load
        your model. This method will be called once when the deployment is created.
        """
        print("Initializing")
        self.model = load_model()  # placeholder (pseudocode): load your model here

    # This method can have any name, as long as it takes a dictionary as input and
    # returns a dictionary as output. In this example, the encode method encodes the
    # input text into a vector.
    def encode(self, input_dict: Dict[str, Any]) -> Dict[str, Any]:
        """
        This method will be called when the deployment is queried. It will receive
        the input data and should return the output data.
        """
        text = input_dict["text"]
        embeddings = self.model.encode(text)  # placeholder (pseudocode): run inference here
        # To use the 'Ray / Seldon Vectorize Field' stage, the output key must be `vector`.
        # If you use the 'Machine Learning' stage instead, the output key must match the
        # output key configured in that stage.
        return {"vector": embeddings}

    async def __call__(self, http_request: Request) -> Dict[str, Any]:
        input_dict: Dict[str, Any] = await http_request.json()
        return self.encode(input_dict=input_dict) # This will be the function you defined above, in this case encode


app = Deployment.bind()

A real instance of this class with the e5-small-v2 model is as follows:
This code pulls the model from Hugging Face at startup. To have the model load from the image without pulling from Hugging Face or other external sources, download the model weights into a local folder and change the model name to that folder name preceded by ./ (a download sketch appears after the function descriptions below).
import json
import sys
from time import time
from typing import Any, Dict

import torch
import torch.nn.functional as F
from ray import serve
from starlette.requests import Request
from starlette.responses import JSONResponse
from torch import Tensor
from transformers import AutoModel, AutoTokenizer

HUB_MODEL_NAME = "intfloat/e5-small-v2"


@serve.deployment(num_replicas=1, ray_actor_options={"num_cpus": 1})
class Deployment(object):
    def __init__(self):
        from loguru import logger

        self.logger = logger
        # Initializing logger
        self.logger.remove()
        self.logger.add(sys.stdout, level="INFO", serialize=False, colorize=True)

        # Initializing model
        self.logger.info("Loading model...")
        self.tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL_NAME)
        self.model = AutoModel.from_pretrained(HUB_MODEL_NAME)
        self.model.eval()
        self.logger.info("Model initialization finished!")

    def encode(self, input_dict: Dict[str, Any]) -> Dict[str, Any]:
        _start_time = time()

        # Extracting text from input
        text = input_dict["text"]

        # Tokenization
        tokenized_texts = self.tokenizer(
            text,
            max_length=512,
            padding=True,
            truncation=True,
            return_tensors="pt",
        )

        # Encoding
        with torch.inference_mode():
            # Forward pass of the model
            outputs = self.model(**tokenized_texts)

            # Average pooling the last hidden states
            embeddings = self.average_pool(
                outputs.last_hidden_state, tokenized_texts["attention_mask"]
            )

            # Normalizing embeddings
            embeddings = F.normalize(embeddings, p=2, dim=1)

            # Converting into output format
            output_dict = {"vector": embeddings.squeeze().tolist()}

        prediction_time = (time() - _start_time) * 1000
        self.logger.info(f"Time taken to make a prediction: {prediction_time:.0f}ms")
        return output_dict

    async def __call__(self, http_request: Request) -> Dict[str, Any]:
        try:
            input_dict: Dict[str, Any] = await http_request.json()
        except UnicodeDecodeError:
            body_bytes = await http_request.body()
            try:
                decoded = body_bytes.decode("utf-8", errors="replace")
                input_dict = json.loads(decoded)
            except json.JSONDecodeError:
                return JSONResponse({"error": "Invalid JSON"}, status_code=400)
        return self.encode(input_dict=input_dict)

    @staticmethod
    def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
        last_hidden = last_hidden_states.masked_fill(
            ~attention_mask[..., None].bool(), 0.0
        )
        return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


app = Deployment.bind()
In the preceding code, logging has been added for debugging purposes. The code example contains the following functions:
  • __call__: This method is required; Ray Serve calls it for every request made to the deployment.
  • __init__: This is where models, tokenizers, vectorizers, and the like should be loaded and assigned to self for later use.
    It is recommended that you bundle your model’s trained parameters directly into the Docker container rather than reaching out to external storage inside __init__.
  • encode: This is where the field or query passed from Fusion is processed.
    Alternatively, you can process it all in the __call__ function, but it is cleaner not to.
    The encode function can handle any text processing needed to prepare the input, call model.predict() or its equivalent, and return the expected model result.
If the output needs additional manipulation, that should be done before the result is returned. For embedding models, the return value must have the shape (1, DIM), where DIM (the vector dimension) is a consistent integer, so that Fusion can handle the vectors returned by Ray.
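To make the shape requirement concrete, here is a small sketch of the expected payload for an embedding model. The 384 used here is the vector dimension of e5-small-v2; substitute your own model’s dimension.
import torch

embeddings = torch.randn(1, 384)        # model output with shape (1, DIM)
vector = embeddings.squeeze().tolist()  # flat list of DIM floats
assert len(vector) == 384

output_dict = {"vector": vector}        # what the encode method returns to Fusion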
The class name must be Deployment() and the name of this file must be deployment.py.
In the preceding example, the Python file is named deployment.py and the class name is Deployment().
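As the earlier note explains, you can bundle the model weights into the Docker image instead of pulling them from Hugging Face at startup. The following is a minimal sketch, assuming the standard transformers save_pretrained API and a local ./e5-small-v2 directory that you copy into the image alongside deployment.py:
from transformers import AutoModel, AutoTokenizer

HUB_MODEL_NAME = "intfloat/e5-small-v2"
LOCAL_DIR = "./e5-small-v2"  # hypothetical folder name; copy it into the image

# Run once at build time to save the tokenizer and weights locally.
AutoTokenizer.from_pretrained(HUB_MODEL_NAME).save_pretrained(LOCAL_DIR)
AutoModel.from_pretrained(HUB_MODEL_NAME).save_pretrained(LOCAL_DIR)

# In deployment.py, point the model name at the local folder instead:
# HUB_MODEL_NAME = "./e5-small-v2"
Remember to add a COPY instruction for the folder in the Dockerfile so the weights are present at runtime.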

Create a Dockerfile

The next step is to create a Dockerfile. The Dockerfile should follow this general outline; read the comments for additional details:
# It is important that the base image is a python:3.x-slim image
FROM python:3.10-slim

# Install dependencies
RUN apt-get update && apt-get install -y wget

# Create working app directory
RUN mkdir -p /app
WORKDIR /app

# Copy the requirements file and install the dependencies
COPY requirements.txt /app
RUN pip install -r requirements.txt --no-cache-dir

# Copy source code
COPY deployment.py /app

# Expose serving port for HTTP communication with Fusion
EXPOSE 8000

# The argument to `serve run` follows the module:application format. Use the value below
# (deployment:app) in the RAY DEPLOYMENT IMPORT PATH field of the 'Create Ray Model Deployment' job.
CMD exec serve run deployment:app

Create a requirements file

The requirements.txt file is a list of installs for the Dockerfile to run to ensure the Docker container has the right resources to run the model. For the e5-small-v2 model, the requirements are as follows:
torch -f https://download.pytorch.org/whl/torch_stable.html # Make sure that we download CPU version of PyTorch
transformers
loguru
ray[serve]==2.42.1
Any recent ray[serve] version should work, but the tested and known supported version is 2.42.1. In general, if an item is used in an import statement in your Python file, it should be included in the requirements file. To populate the requirements, run the following command in the terminal, inside the directory that contains your code:
pip freeze > requirements.txt

Build and push the Docker image

After creating the deployment.py, Dockerfile, and requirements.txt files, run the following Docker commands in order:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
docker push [DOCKERHUB-USERNAME]/[REPOSITORY]:[VERSION-TAG]
Using the example model, the terminal commands would be as follows:
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build . -t jstrmec/e5-small-v2-ray:0.1
docker push jstrmec/e5-small-v2-ray:0.1
This repository is public and you can visit it here: e5-small-v2-ray

Deploy the model in Fusion

Now you can go to Fusion to deploy your model.
  1. In Fusion, navigate to Collections > Jobs.
  2. Add a job by clicking the Add+ Button and selecting Create Ray Model Deployment.
  3. Fill in each of the text fields:
    • Job ID: A string used by the Fusion API to reference the job after its creation.
    • Model name: A name for the deployed model. This is used to generate the deployment name in Ray. It is also the name that you reference as a model-id when making predictions with the ML Service.
    • Model min replicas: The minimum number of load-balanced replicas of the model to deploy.
    • Model max replicas: The maximum number of load-balanced replicas of the model to deploy. Specify multiple replicas for a higher-volume intake.
    • Model CPU limit: The number of CPUs to allocate to a single model replica.
    • Model memory limit: The maximum amount of memory to allocate to a single model replica.
    • Ray Deployment Import Path: The path to your top-level Ray Serve deployment (or the same path passed to serve run). For example, deployment:app.
    • Docker Repository: The public or private repository where the Docker image is located. If you’re using Docker Hub, fill in the Docker Hub username here.
    • Image name: The name of the image. For example, e5-small-v2-ray:0.1.
    • Kubernetes secret: If you’re using a private repository, supply the name of the Kubernetes secret used for access.
  4. Click Advanced to view and configure advanced details:
    • Additional parameters: This section lets you enter parameter name:parameter value options to be injected into the training JSON map at runtime. The values are inserted as they are entered, so you must surround string values with ". This is the sparkConfig field in the configuration file.
    • Write Options: This section lets you enter parameter name:parameter value options to use when writing output to Solr or other sources. This is the writeOptions field in the configuration file.
    • Read Options: This section lets you enter parameter name:parameter value options to use when reading input from Solr or other sources. This is the readOptions field in the configuration file.
  5. Click Save, then Run and Start. When the job finishes successfully, you can proceed to the next section.
Now that the model is in Fusion, you can use it in the Machine Learning or Ray / Seldon Vectorize index and query stages.
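Before configuring pipelines, you can optionally verify the deployment directly against the prediction endpoint shown in the local testing example. The sketch below assumes the requests package and uses the same placeholder host, credentials, and model name as the earlier curl command.
import requests

FUSION_HOST = "https://FUSION_HOST.com:6764"  # replace with your Fusion host
MODEL_NAME = "MODEL_NAME"                     # the model name from the deployment job

response = requests.post(
    f"{FUSION_HOST}/api/ai/ml-models/{MODEL_NAME}/prediction",
    auth=("FUSION_USER", "FUSION_PASSWORD"),
    json={"text": "i love fusion"},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # expected: {"vector": [...]} for this embedding model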

Configure the Fusion pipelines

Your real-world pipeline configuration depends on your use case and model, but for our example we will configure the index pipeline and then the query pipeline.
Configure the index pipeline
  1. Create a new index pipeline or load an existing one for editing.
  2. Click Add a Stage and then Machine Learning.
  3. In the new stage, fill in these fields:
    • The model ID
    • The model input
    • The model output
  4. Save the stage in the pipeline and index your data with it.
Configure the query pipeline
  1. Create a new query pipeline or load an existing one for editing.
  2. Click Add a Stage and then Machine Learning.
  3. In the new stage, fill in these fields:
    • The model ID
    • The model input
    • The model output
  4. Save the stage and then run a query by typing a search term.
  5. To verify the Ray results are correct, use the Compare+ button to see another pipeline without the model implementation and compare the number of results.
You have now successfully uploaded a Ray model to Fusion and deployed it.
If you previously deployed a model with Seldon, you can deploy the same model with Ray.
Just follow the instructions in Develop and deploy a machine learning model with Ray, and deploy the model with a different name to avoid conflicts. When you have verified that the Ray model is working after deployment with Ray, you can delete the Seldon model using the Delete Seldon Core Model Deployment job.

AI and machine learning features

Managed Fusion’s machine learning services now run on Python 3.10 and Java 11, bringing improved performance, security, and compatibility with the latest libraries. These upgrades enhance model execution speed, memory efficiency, and long-term support, ensuring Managed Fusion’s machine learning capabilities remain optimized for your evolving AI workloads. No configuration changes are required to take advantage of these improvements.

Improved prefiltering support for Neural Hybrid Search (NHS)

Managed Fusion 5.9.12 introduces a more robust prefiltering strategy for Neural Hybrid Search, including support for chunked document queries. The new approach ensures that security filters and other constraints are applied consistently and efficiently to KNN queries—improving precision, maintaining performance, and avoiding previous compatibility issues with Solr query syntax.

Bug fixes

  • Fixed an issue where ConfigSync removed job schedules during upgrade.
    In some cases, upgrading to Managed Fusion 5.9.11 could result in ConfigSync removing job schedules from the cluster without consistently reapplying them, leading to missing or disabled schedules post-upgrade.
    Managed Fusion 5.9.12 resolves this issue, ensuring job schedules remain intact across upgrades and ConfigSync behaves predictably in all environments.
  • Fixed incorrect image repository for Solr in Helm charts.
    In Managed Fusion 5.9.11, the Helm chart specified an internal Lucidworks Artifactory repository for the Solr image.
    This has been corrected in 5.9.12 so the Solr image repository is either empty or points to lucidworks/fusion-solr, aligning with other components and simplifying deployment for external environments.
  • Added support for configuring the Spark version used by Fusion.
    Managed Fusion 5.9.12 now lets you choose whether to use Spark 3.4.1 (introduced in Fusion 5.9.10) or the earlier Spark 3.2.2 version used in Fusion 5.9.9.
    This flexibility helps maintain compatibility with legacy Python (3.7.3) and Scala environments, especially for apps that depend on specific Spark runtime behaviors.
    When Spark 3.4.1 is enabled, custom Python jobs require Python 3.10.
    Contact Lucidworks to request Spark 3.2.2 for your Managed Fusion deployment.
  • Fixed incorrect started-by values for datasource jobs in the job history.
    In previous versions, datasource jobs started from the Managed Fusion UI were incorrectly shown as started by default-subject instead of the actual user.
    Managed Fusion now correctly records and displays the initiating user in the job history, restoring accurate audit information for datasource operations.
  • Fixed a schema loading issue that prevented older apps from working with the Schema API. Managed Fusion now correctly handles both managed-schema and managed-schema.xml files when reading Solr config sets, ensuring backward compatibility with apps created before the move to template-based config sets.
    This prevents Schema API failures caused by unhandled exceptions during schema file lookup.
  • Scheduled jobs now correctly trigger dependent jobs.
    In Managed Fusion 5.9.12, we fixed an issue that prevented scheduled jobs from triggering other jobs based on their success or failure status.
    This includes jobs configured to run “on_success_or_failure” or using the “Start + Interval” option.
    With this fix, dependent jobs now execute as expected, restoring reliable job chaining and scheduling workflows.
  • Fixed an issue that prevented updates to existing scheduled job triggers in the Schedulers view.
    This bug was caused by inconsistencies in how the API returned UTC timestamps, particularly for times after 12:00 UTC. The Admin UI now correctly detects changes and allows updates to trigger times without requiring the entry to be deleted and recreated.
  • Improved reliability of scheduled jobs in the job-config service.
    This release resolves several issues that could interfere with job scheduling and history visibility in Managed Fusion environments:
    • Stronger recovery from infrastructure interruptions: Ensures the scheduler recovers if all job-config pods briefly lose connection to ZooKeeper.
    • Correct permission handling: Fixes cases where jobs could not be scheduled due to mismatches between user permissions and service account behavior.
    • Restored visibility of system job history: Fixes an issue where system jobs such as delete-old-system-logs and delete-old-job-history were missing from the UI despite running normally in the background.
    • Reliable schedule creation in all app states: Fixes an issue where adding a new schedule from the Run dialog appeared to succeed but did not persist the configuration in some apps.
  • Fixed a simulation failure in the Index Workbench when configuring new datasources.
    Managed Fusion 5.9.12 resolves an issue where Index Workbench failed to simulate results after configuring a new datasource, displaying the error “Failed to simulate results from a working pipeline.” This fix restores full functionality to the Index Workbench, allowing you to preview and configure indexing workflows in one place without switching between multiple views.
  • Fixed a bug that caused aborted jobs to appear twice in the job history.
    Previously, when you manually aborted a job, it was recorded twice in the job history.
    This duplication has been resolved, and each aborted job now appears only once in the history log.
  • Fixed an issue that prevented segment-based rule filtering from working correctly in Commerce Studio. Managed Fusion now honors the lw.rules.target_segment parameter, ensuring only matching rules are triggered and improving rule targeting and safety.
  • This release eliminates extra warning messages in the API Gateway related to undetermined service ports. Previously, the gateway logged repeated warnings about missing primary-port-name labels, even though this did not impact functionality. This fix reduces unnecessary log noise and improves the clarity of your logs.

Known issues

  • UI may incorrectly report job-config as down
    In Managed Fusion 5.9.12 through 5.9.13, the job-config service may be flagged as “down” in the UI even when running normally.
    This display issue is fixed in Managed Fusion 5.9.14.
  • Jobs and V2 datasources may fail when Managed Fusion collections are remapped to different Solr collections.
    In Managed Fusion versions 5.9.12 through 5.9.13, strict validation in the job-config service causes “Collection not found” errors when jobs or V2 datasources target Managed Fusion collections that point to differently named Solr collections.
    This issue is fixed in Managed Fusion 5.9.14.
    As a workaround, use V1 datasources or avoid using REST call jobs on remapped collections.
  • Saving large pipelines during high traffic may trigger service instability.
    In some environments, saving large query pipelines while handling high traffic loads can cause the Query service to crash with OOM errors due to thread contention.
    Managed Fusion 5.9.14 resolves this issue.
    If you’re impacted and not yet on this version, contact Lucidworks Support for mitigation options.
  • Jobs for Web V2 connectors may fail to start after an earlier failure.
    If a Web V2 connector job is interrupted, such as by scaling down the connector pod, the system may enter a corrupted state.
    Even after clearing and recreating the datasource, new jobs may fail with the error The state should never be null.
    This issue is fixed in Fusion 5.9.13.
  • The fusion-spark-3.2.2 image in Fusion 5.9.12 may fail to refresh Kubernetes tokens correctly.
    In Managed Fusion 5.9.12 environments, Spark jobs that rely on token-based authentication can fail due to a Fabric8 client bug in the 3.2.2 Spark image.
    This may impact the stability or execution of long-running jobs. This issue is fixed in Fusion 5.9.13.
  • The job-config service may incorrectly report a DOWN status via /actuator/health even when running normally.
    When TLS is enabled and ZooKeeper is unavailable for an extended period, the job-config service may resume normal operation but continue to report DOWN on the actuator health endpoint, despite readiness and liveness probes reporting UP.
    This issue is fixed in Fusion 5.9.13.
  • Web connector may fail to index due to corrupted job state
    Managed Fusion running 5.9.12 may fail to index with the Webv2 connector (v2.0.1) due to a corrupted job state in the connectors-backend service.
    Affected jobs log the error The state should never be null, and common remediation steps like deleting the datasource or reinstalling the connector plugin may not resolve the issue.
    The issue is fixed in Managed Fusion 5.9.13.
  • Saving new datasource schedules may fail silently.
    In some Managed Fusion 5.9.12 environments, clicking Save when adding a schedule from the datasource “Run” dialog does not persist the schedule or show an error message, particularly in apps created before the upgrade.
    As a workaround, use a new app or manually verify that the job configuration was saved.
    This issue is fixed in Managed Fusion 5.9.13.

Removals

For full details on removals, see Deprecations and Removals.
  • Bitnami removal
    By August 28, 2025, Fusion’s Helm chart will reference internally built open-source images instead of Bitnami images due to changes in how Bitnami hosts images.
  • The Tika Server Parser is removed in this release.
    Use the Tika Asynchronous Parser instead. Asynchronous Tika parsing performs parsing in the background. This allows Managed Fusion to continue indexing documents while the parser is processing others, resulting in improved indexing performance for large numbers of documents.
  • MLeap is removed from the ml-model service. MLeap was deprecated in Managed Fusion 5.2.0 and was no longer used by Managed Fusion.

Platform support and component versions

Kubernetes platform support

Lucidworks has tested and validated support for the following Kubernetes platform and versions:
  • Google Kubernetes Engine (GKE): 1.29, 1.30, 1.31
For more information on Kubernetes version support, see the Kubernetes support policy.

Component versions

The following list details the versions of key components that may be critical to deployments and upgrades.
  • Solr: fusion-solr 5.9.12 (based on Solr 9.6.1)
  • ZooKeeper: 3.9.1
  • Spark: 3.4.1
  • Ingress Controllers: Nginx, Ambassador (Envoy), GKE Ingress Controller
  • Ray: ray[serve] 2.42.1
More information about support dates can be found at Lucidworks Fusion Product Lifecycle.