- English language model text encoder
- Multilingual language model text encoder
- Custom model
For detailed API specifications in Swagger/OpenAPI format, see Platform APIs.
Prerequisites
To use this API, you need:
- The unique `APPLICATION_ID` for your Lucidworks AI application. For more information, see Credentials to use APIs.
- A bearer token generated with a scope value of `machinelearning.predict`. For more information, see Authentication API.
- The `USE_CASE` and `MODEL_ID` fields for the use case request. The path is: `/ai/prediction/USE_CASE/MODEL_ID`. A list of supported models is returned in the Lucidworks AI Use Case API.
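Putting these together, the sketch below shows one way to call the endpoint with Python's `requests` library. The host name, request body shape, and model ID are placeholders for illustration; only the path shape and token scope come from the prerequisites above, and the exact request schema is defined in the API spec.

```python
import requests

# All values below are placeholders; only the path shape
# /ai/prediction/USE_CASE/MODEL_ID comes from this page.
HOST = "https://example-lucidworks-host"  # assumption: your deployment's base URL
USE_CASE = "embedding"                    # one of the supported use cases
MODEL_ID = "your-model-id"                # listed by the Lucidworks AI Use Case API
TOKEN = "your-bearer-token"               # scope: machinelearning.predict

response = requests.post(
    f"{HOST}/ai/prediction/{USE_CASE}/{MODEL_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"batch": [{"text": "1990s children's fiction"}]},  # assumed body shape; see the API spec
    timeout=30,
)
response.raise_for_status()
print(response.json())
```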
Unique values for the embeddings use cases
Some parameter values available in the `embeddings` use case are unique to this use case, including values for the `useCaseConfig` parameter. Refer to the API spec for more information.
Vector quantization
Quantization converts float vectors into integer vectors, enabling byte vector search with 8-bit integers. Float vectors are precise, but they are expensive to compute and store, especially as their dimensionality grows. One solution is to convert the vector floats into integers after inference, producing byte vectors that use less memory and are faster to compute with, at minimal loss in accuracy or quality. Byte vectors are available through all of the Lucidworks LWAI hosted embedding models, including custom-trained models. Vector quantization methods are selected through the `modelConfig` parameter `vectorQuantizationMethod`. The supported methods are `min-max` and `max-scale`.
- The `min-max` method creates tensors of embeddings and converts them to uint8 by normalizing them to the range [0, 255].
- The `max-scale` method finds the maximum absolute value along each embedding, normalizes the embeddings by scaling them to the range [-127, 127], and returns the quantized embeddings as an 8-bit integer tensor.

The `max-scale` method has no loss at the ten-thousandths place when evaluated against non-quantized vectors. Other methods lose precision when evaluated against non-quantized vectors, with `min-max` losing the most.
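As a rough illustration of what these two methods compute, here is a minimal NumPy sketch. It approximates the behavior described above under stated assumptions; the function names are invented for illustration and this is not the hosted implementation.

```python
import numpy as np

def quantize_min_max(embeddings: np.ndarray) -> np.ndarray:
    """Normalize each embedding to [0, 255] and cast to uint8."""
    mins = embeddings.min(axis=-1, keepdims=True)
    maxs = embeddings.max(axis=-1, keepdims=True)
    scaled = (embeddings - mins) / (maxs - mins + 1e-12)  # guard against flat vectors
    return np.round(scaled * 255).astype(np.uint8)

def quantize_max_scale(embeddings: np.ndarray) -> np.ndarray:
    """Scale each embedding by its largest absolute value into [-127, 127], cast to int8."""
    max_abs = np.abs(embeddings).max(axis=-1, keepdims=True)
    scaled = embeddings / (max_abs + 1e-12)               # values now in [-1, 1]
    return np.round(scaled * 127).astype(np.int8)

vec = np.random.randn(1, 768).astype(np.float32)  # one 768-dim float vector
print(quantize_min_max(vec).dtype, quantize_max_scale(vec).dtype)  # uint8 int8
```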
Matryoshka vector dimension reduction
Vector dimension reduction makes the default vector size of a model smaller. The purpose of this reduction is to lessen the burden of storing large vectors while still retaining most of the quality of the larger model. The technique, called Matryoshka Representation Learning (MRL), lets you reduce vector size while maintaining good quality. For information about the pre-trained embedding models that use the Matryoshka Representation Learning technique, see the pre-trained embedding model documentation.

You can reduce vectors for any model using the `modelConfig` parameter `dimReductionSize`, which accepts any integer greater than 0 and less than or equal to the vector dimension of the model.
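Conceptually, MRL-style reduction keeps the leading dimensions of a vector and renormalizes it, as in the sketch below. This illustrates the general technique under that assumption and is not the service's exact implementation; setting `dimReductionSize` performs the reduction server-side.

```python
import numpy as np

def reduce_dims(embedding: np.ndarray, dim_reduction_size: int) -> np.ndarray:
    """Keep the first k dimensions of an MRL embedding and L2-renormalize."""
    assert 0 < dim_reduction_size <= embedding.shape[-1]
    reduced = embedding[..., :dim_reduction_size]
    norm = np.linalg.norm(reduced, axis=-1, keepdims=True)
    return reduced / norm

vec = np.random.randn(768).astype(np.float32)  # e.g., a 768-dim encoder output
small = reduce_dims(vec, 256)                  # 256-dim, unit-length vector
print(small.shape)  # (256,)
```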
English language model text encoder
The English language encoder takes in plain English text and returns a 768-dimensional vector encoding of that text. This model powers semantic search. The API truncates incoming text to approximately 256 words before the model encodes it and returns a vector. An example usage pattern is to encode all the texts and descriptions in a website, and then use this encoder on query text, supporting natural language queries such as "1990s children's fiction". Each API request includes one batch containing up to 32 text strings.
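Because each request carries at most one batch of 32 strings, a larger corpus must be chunked client-side. A minimal sketch follows; the `predict` call mentioned in the comment stands in for the request shown earlier and is hypothetical.

```python
def batches(texts, size=32):
    """Yield successive chunks of at most `size` texts, the per-request batch limit."""
    for i in range(0, len(texts), size):
        yield texts[i:i + size]

corpus = [f"product description {n}" for n in range(100)]
for chunk in batches(corpus):
    # predict(chunk) would POST one batch to /ai/prediction/USE_CASE/MODEL_ID
    print(len(chunk))  # 32, 32, 32, 4
```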
Multilingual language model text encoder
The multilingual encoder takes in plain text and returns a 384-dimensional vector encoding of that text. The API truncates incoming text to approximately 256 words before the model encodes it and returns a vector. Each API request includes one batch containing up to 32 text strings. The text strings in a batch do not have to be in the same language, and a single `text` value can mix words from multiple languages. Because long text strings are truncated to approximately 256 words, the order and length of the value affect the returned results.
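For example, one batch can mix languages freely. The body shape below follows the earlier request sketch and is an assumption; see the API spec for the real schema.

```python
batch = [
    {"text": "Where can I find hiking boots?"},           # English
    {"text": "¿Dónde puedo encontrar botas de montaña?"}, # Spanish
    {"text": "Wanderschuhe und hiking boots kaufen"},     # German and English in one value
]
# POST {"batch": batch} to /ai/prediction/USE_CASE/MODEL_ID as shown earlier.
```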
Custom model
The request uses the `MODEL_ID` you enter for the custom model to return a prediction.
Verify information sent to embedding models
The Lucidworks AI Tokenization API returns the tokens for Prediction API `embedding` use case input before it is sent to any pre-trained or custom embedding model. You can use this information to help debug and to ensure the input to the pre-trained or custom embedding model is valid and within the model's processing limits.
For more information, see Tokenization API.
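As a debugging aid, you might inspect tokens before submitting a batch for prediction. The endpoint URL and response shape below are hypothetical placeholders; the real path and schema are defined in the Tokenization API documentation.

```python
import requests

# Hypothetical endpoint and body shape; consult the Tokenization API docs.
TOKENIZE_URL = "https://example-lucidworks-host/..."  # placeholder, not a real path

def inspect_tokens(text: str, token: str) -> None:
    """Send text to the Tokenization API and print the returned tokens."""
    resp = requests.post(
        TOKENIZE_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"batch": [{"text": text}]},  # assumed to mirror the Prediction API body
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # check token counts against the model's processing limits
```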