- Use cases are set in the `useCaseConfig` parameter.
- Models are set in the `modelConfig` parameter.
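As a hedged illustration of how the two parameters fit into a request: the exact request body shape is not shown on this page, so the `batch` and `text` fields and the specific option names below are assumptions, not confirmed API fields.

```json
{
  "batch": [
    { "text": "Example input text" }
  ],
  "useCaseConfig": {
    "useSystemPrompt": true
  },
  "modelConfig": {
    "temperature": 0.7
  }
}
```

Use-case-specific options go under `useCaseConfig`, while model-level options and credentials go under `modelConfig`.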
Generative AI indexing enrichment
Generative AI processes indexed data and enriches it with AI-generated information to improve the search experience and enhance the quality of results. These diagrams display the indexing enrichment process for several functions and use cases.
Generative AI prediction indexing enrichment

Generative AI vectorize indexing enrichment

Generative AI summarization indexing enrichment

Generative AI keyword extraction indexing enrichment

Generative AI named entity recognition (NER) indexing enrichment

Generative AI query enrichment
In addition to the rules you create and deploy, generative AI enhances queries by rewriting, determining similar context and terms, and interpreting the potential meaning of search entries. This diagram displays the query enrichment process.
Generative AI models
Lucidworks-hosted GenAI models
All of the models currently hosted by Lucidworks are open source. In models hosted by Lucidworks, the data is:
- Contained within Lucidworks and never exposed to third parties.
- Passed to the specified model to generate responses to the instance’s requests, but not retained to train the model or used by it after the initial request.
- Llama-3.1-8b-instruct (`llama-3-8b-instruct`)
- Llama-3.2-3b-instruct (`llama-3v2-3b-instruct`)
- Mistral-7B-instruct (v0.2) (`mistral-7b-instruct`)
- nu-zero-ner. This model only supports the NER use case.
| Model | Sync Length | Async Length |
|---|---|---|
| `llama-3-8b-instruct` | 32000 | 128000 |
| `llama-3v2-3b-instruct` | 32000 | 64000 |
| `mistral` | 8192 | 8192 |
| `nu-zero-ner` | 384 | 384 |
OpenAI models
An API key is required in each OpenAI model request. There is no default key. The supported OpenAI models are:
- `gpt-4`
- `gpt-4o`
- `gpt-4o-2024-05-13`
- `gpt-4-0613`
- `gpt-4-turbo`
- `gpt-4-turbo-2024-04-09`
- `gpt-4-turbo-preview`
- `gpt-4-1106-preview`
- `gpt-3.5-turbo`
- `gpt-3.5-turbo-1106`
- `gpt-3.5-turbo-0125`
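Because the key is passed per request, it travels in the request's `modelConfig`. A minimal sketch with a placeholder value (other request fields omitted):

```json
{
  "modelConfig": {
    "apiKey": "<YOUR_OPENAI_API_KEY>"
  }
}
```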
Azure OpenAI models
Deployed Azure OpenAI models are supported in the LWAI Prediction API and the Lucidworks AI Async Prediction API in the following use cases:
- Pass-through
- Retrieval Augmented Generation (RAG)
- Standalone query rewriter
- Summarization
- Keyword extraction
- Named Entity Recognition (NER)
To use Azure OpenAI models, the following are required:
- A valid Azure subscription on Microsoft Azure.
- Deployed Azure OpenAI models you want to use. Lucidworks does not support Azure AI Studio.
- The Azure Deployment Name for the model you want to use. Use this as the value of the Lucidworks AI API `"modelConfig": "azureDeployment"` field.
- The Azure Key1 or Key2 for the model you want to use. Use either as the value of the Lucidworks AI API `"modelConfig": "apiKey"` field.
- The Azure Endpoint for the model you want to use. Use this as the value of the Lucidworks AI API `"modelConfig": "azureEndpoint"` field.
- The Lucidworks AI API value of `MODEL_ID` for Azure OpenAI is `azure-openai`.
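Putting the Azure values together, a sketch of the `modelConfig` portion of a request — placeholder values throughout, and the endpoint URL format is illustrative of a typical Azure OpenAI resource endpoint, not confirmed by this page:

```json
{
  "modelConfig": {
    "azureDeployment": "<AZURE_DEPLOYMENT_NAME>",
    "apiKey": "<AZURE_KEY1_OR_KEY2>",
    "azureEndpoint": "https://<YOUR_RESOURCE>.openai.azure.com/"
  }
}
```

Since `MODEL_ID` is `azure-openai`, a request path would end in `azure-openai` rather than a specific model name.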
Google Vertex AI models
Lucidworks AI supports these Google Vertex AI models:
- gemini-2.5-pro (based on `gemini-2.5-pro-preview-03-25`)
- gemini-2.5-flash (based on `gemini-2.5-flash-preview-04-17`)
- gemini-2.0-flash
- gemini-2.0-flash-lite

Each Google Vertex AI model request requires the `apiKey`, `googleProjectId`, and `googleRegion` fields. There are no defaults for any of these fields. The value for `apiKey` is a base64-encoded Google Vertex AI service account key. To learn how to create it, see Create a Google service account key.
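A sketch of the `modelConfig` portion of a Google Vertex AI request, using the three required fields with placeholder values:

```json
{
  "modelConfig": {
    "apiKey": "<BASE64_ENCODED_SERVICE_ACCOUNT_KEY>",
    "googleProjectId": "<GOOGLE_PROJECT_ID>",
    "googleRegion": "<GOOGLE_REGION>"
  }
}
```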
Anthropic models
An API key is required in each Anthropic model request. There is no default key. The supported Anthropic models are:
- `claude-sonnet-4-20250514`
- `claude-3-7-sonnet-20250219`
- `claude-3-5-sonnet-20241022`
- `claude-3-5-haiku-20241022`
Generative AI use cases
The GenAI use cases are used to run predictions from pre-trained models. The Prediction API also contains the embedding use case, which is not categorized as a GenAI use case.
The GenAI use cases support:
- Pre-trained models for the LWAI Prediction API.
- Custom models for either the LWAI Prediction API or the Lucidworks AI Async Prediction API.
The generic path for the Prediction API is `/ai/prediction/USE_CASE/MODEL_NAME`.
The generic path for the Async Prediction API is `/ai/async-prediction/USE_CASE/MODEL_NAME`.
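For example, pairing the summarization use case with the `llama-3-8b-instruct` model (an illustrative combination) yields these paths:

```
/ai/prediction/summarization/llama-3-8b-instruct
/ai/async-prediction/summarization/llama-3-8b-instruct
```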
The GenAI use cases based on the generic path are as follows:
- Pass-through use case lets you use the Generative AI services as a proxy to the large language model (LLM). Use this use case when you want full control over the prompt sent to the GenAI model.
- Retrieval augmented generation (RAG) use case inserts candidate documents into an LLM’s context to ground the generated response in those documents, reducing the frequency of LLM hallucinations.
- Standalone query rewriter use case rewrites the text in relation to information associated with the `memoryUuid`. This use case can be invoked during the RAG use case.
- Summarization use case, where the LLM ingests text and returns a summary of that text as a response.
- Keyword extraction use case, where the LLM ingests text and returns a JSON response that lists keywords extracted from that text.
- Named Entity Recognition (`ner`) use case, where the LLM ingests text and a list of entities to extract, and returns a JSON response that contains the entities extracted from the text.
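As a loudly hedged sketch of an `ner` request: the `useCaseConfig` field name below (`entityList`) is hypothetical and not confirmed by this page, as are `batch` and `text`; only the idea of passing both the text and the entity types to extract comes from the description above.

```json
{
  "batch": [
    { "text": "Jane Doe joined Lucidworks in 2020." }
  ],
  "useCaseConfig": {
    "entityList": ["person", "organization", "date"]
  }
}
```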