To use this API, you need:

- The APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.
- A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.
- The USE_CASE and MODEL_ID fields for the use case request. The path is /ai/prediction/USE_CASE/MODEL_ID. A list of supported models is returned by the Lucidworks AI Use Case API. A minimal example request is sketched after this list.
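The following Python sketch shows one way to call the prediction endpoint. It assumes the requests library, a host of the form APPLICATION_ID.applications.lucidworks.com, and a batch/text request body; those details are assumptions for the example, not the definitive contract, so confirm the base URL and body shape against the API spec.

```python
import requests  # third-party HTTP client, used here only for illustration

# Placeholder values -- substitute your own.
APPLICATION_ID = "my-application-id"
USE_CASE = "embeddings"
MODEL_ID = "my-model-id"
BEARER_TOKEN = "token-with-machinelearning.predict-scope"

# Assumed host layout; check your application's actual base URL.
BASE_URL = f"https://{APPLICATION_ID}.applications.lucidworks.com"
url = f"{BASE_URL}/ai/prediction/{USE_CASE}/{MODEL_ID}"

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    # Body layout is an assumption for this sketch; see the API spec.
    json={"batch": [{"text": "red running shoes"}]},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```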
Some of the parameter values available to the embeddings use case are unique to this use case, including values for the useCaseConfig parameter.
Refer to the API spec for more information.
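As a sketch only, the snippet below shows where a useCaseConfig block would sit in the request body. The dataType key and its value are assumptions made for this example; take the real parameter names and allowed values from the API spec.

```python
# Hypothetical request body showing where useCaseConfig fits.
# The "dataType" key and its value are assumptions for this example only.
request_body = {
    "batch": [{"text": "red running shoes"}],
    "useCaseConfig": {
        "dataType": "query",  # assumed key, e.g. marking the text as a query rather than a passage
    },
}
```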
The embeddings use case also supports the modelConfig parameter vectorQuantizationMethod, which quantizes the returned vectors. The available methods are named min-max and max-scale.

- The min-max method creates tensors of the embeddings and converts them to uint8 by normalizing them to the range [0, 255].
- The max-scale method finds the maximum absolute value along each embedding, normalizes the embeddings by scaling them to a range of -127 to 127, and returns the quantized embeddings as an 8-bit integer tensor.

The max-scale method has no loss at the ten-thousandths place when evaluated against non-quantized vectors. Other methods lose some precision when evaluated against non-quantized vectors, with min-max losing the most. A sketch of both methods follows.
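To make the arithmetic concrete, the following NumPy sketch approximates the two methods as described above. It is an illustration only, not Lucidworks' implementation.

```python
import numpy as np


def min_max_quantize(embeddings: np.ndarray) -> np.ndarray:
    """Normalize each embedding to [0, 255] and cast to uint8 (illustrative only)."""
    mins = embeddings.min(axis=-1, keepdims=True)
    maxs = embeddings.max(axis=-1, keepdims=True)
    scaled = (embeddings - mins) / (maxs - mins)      # each value now in [0, 1]
    return np.round(scaled * 255).astype(np.uint8)    # each value now in [0, 255]


def max_scale_quantize(embeddings: np.ndarray) -> np.ndarray:
    """Scale each embedding by its largest absolute value to [-127, 127] and cast to int8."""
    max_abs = np.abs(embeddings).max(axis=-1, keepdims=True)
    scaled = embeddings / max_abs * 127               # each value now in [-127, 127]
    return np.round(scaled).astype(np.int8)


vectors = np.random.default_rng(0).normal(size=(2, 8)).astype(np.float32)
print(min_max_quantize(vectors))
print(max_scale_quantize(vectors))
```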
Another modelConfig parameter is dimReductionSize, which accepts any integer greater than 0 and less than or equal to the vector dimension of the model.

The returned results also depend on the text value: because long text strings are truncated to approximately 256 words, both the order and the length of the value affect the results.
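For example, both modelConfig options can be combined in one request body, as in the sketch below. The body layout is an assumption carried over from the earlier sketch, and the word count check is only a client-side aid for spotting text that is likely to be truncated.

```python
# Hypothetical request body combining the modelConfig options discussed above.
# Field placement is an assumption; the parameter names come from this page.
request_body = {
    "batch": [{"text": "lightweight trail running shoes with good ankle support"}],
    "modelConfig": {
        "vectorQuantizationMethod": "max-scale",  # or "min-max"
        "dimReductionSize": 256,                  # must be > 0 and <= the model's vector dimension
    },
}

# Long text is truncated to roughly 256 words, so a quick client-side check
# helps confirm the most important content appears first.
word_count = len(request_body["batch"][0]["text"].split())
if word_count > 256:
    print(f"Warning: {word_count} words; text beyond ~256 words may be truncated.")
```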
To return a prediction from a custom embedding model, use the MODEL_ID you entered for the custom model.
The Tokenization API returns the embedding use case tokens before they are sent to any pre-trained or custom embedding model. You can use this information to help debug and to ensure the input to the pre-trained or custom embedding model is valid and within the model's processing limits.
For more information, see Tokenization API.
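As one possible debugging step, the sketch below calls a tokenization endpoint before sending text to the embedding model. The /ai/tokenization/USE_CASE/MODEL_ID path and the response handling shown here are hypothetical placeholders; use the path and schema documented in the Tokenization API.

```python
import requests

# Placeholder values -- substitute your own.
APPLICATION_ID = "my-application-id"
USE_CASE = "embeddings"
MODEL_ID = "my-model-id"
BEARER_TOKEN = "token-with-machinelearning.predict-scope"

# Hypothetical path -- see the Tokenization API documentation for the real one.
url = (
    f"https://{APPLICATION_ID}.applications.lucidworks.com"
    f"/ai/tokenization/{USE_CASE}/{MODEL_ID}"
)

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={"batch": [{"text": "red running shoes"}]},
    timeout=30,
)
response.raise_for_status()
# Inspect the returned tokens to confirm the input is within the model's limits.
print(response.json())
```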