Lucidworks AI Async Prediction API
The Lucidworks AI Async Prediction API is used to submit an asynchronous prediction task and retrieve its results later. The POST request submits a prediction task for a specific useCase and modelId. The API responds with the following information:

- predictionId. A unique UUID for the submitted prediction task that can be used later to retrieve the results.
- status. The current state of the prediction task.

The GET request takes the predictionId from a previously-submitted POST request and returns the results associated with that request.
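The flow is submit, then poll. The following is a minimal Python sketch of that flow; the documented pieces are the /ai/async-prediction/USE_CASE/MODEL_ID POST path and the predictionId and status response fields, while the base URL, request body shape, GET path, and status values are illustrative assumptions to confirm against the API spec.

```python
import time
import requests

# Placeholders and assumptions: the host, the body shape, the GET path, and
# the status values are not documented here; check the API spec.
BASE_URL = "https://APPLICATION_ID.applications.lucidworks.com"  # hypothetical host
USE_CASE = "USE_CASE"   # a use case returned by the Lucidworks AI Use Case API
MODEL_ID = "MODEL_ID"   # a model listed in Generative AI models
TOKEN = "ACCESS_TOKEN"  # bearer token with the machinelearning.predict scope

headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. Submit the prediction task. The response carries predictionId and status.
resp = requests.post(
    f"{BASE_URL}/ai/async-prediction/{USE_CASE}/{MODEL_ID}",
    headers=headers,
    json={"batch": [{"text": "What is Lucidworks AI?"}]},  # body shape is illustrative
)
resp.raise_for_status()
prediction_id = resp.json()["predictionId"]

# 2. Poll with the predictionId until the task leaves its in-progress state.
#    "SUBMITTED" is an assumed status value, not a documented one.
while True:
    result = requests.get(
        f"{BASE_URL}/ai/async-prediction/{USE_CASE}/{MODEL_ID}/{prediction_id}",
        headers=headers,
    ).json()
    if result.get("status") != "SUBMITTED":
        break
    time.sleep(2)
print(result)
```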
To use this API, you need:

- The APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.
- A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.
- The USE_CASE and MODEL_ID fields in the /async-prediction path for the POST request. The path is /ai/async-prediction/USE_CASE/MODEL_ID. A list of supported models is returned in the Lucidworks AI Use Case API. For more information about supported models, see Generative AI models.

Some parameters of the /ai/async-prediction/USE_CASE/MODEL_ID POST request are common to all of the generative AI (GenAI) use cases, such as the modelConfig parameter.
Also referred to as hyperparameters, these fields set certain controls on the response.
Refer to the API spec for more information.
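For illustration only, a request body that sets modelConfig hyperparameters might look like the sketch below. The parameter names (temperature, maxTokens) and the batch input shape are assumptions, not the confirmed field list from the API spec.

```python
# Illustrative only: the hyperparameter names and the "batch" input shape are
# assumptions; the API spec lists the actual modelConfig fields.
payload = {
    "batch": [{"text": "Summarize the following document ..."}],
    "modelConfig": {
        "temperature": 0.7,  # hypothetical sampling-control hyperparameter
        "maxTokens": 256,    # hypothetical response-length control
    },
}
```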
Some parameter values in the external documents RAG use case are unique to this use case, including values for the documents and useCaseConfig parameters. Refer to the API spec for more information.
The following is an example request. This example does not include:

- modelConfig parameters, but you can submit requests that include parameters described in Common parameters and fields.
- useCaseConfig parameters, but you can submit requests that include parameters described in Unique values for the external documents RAG use case.
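A sketch of what such a request could look like in Python. The documents parameter name comes from this section; the per-document keys (body, source, title), the batch input shape, and the host are illustrative assumptions.

```python
import requests

# Field names inside "documents" (body, source, title) are assumptions for
# illustration; "documents" itself is the parameter this use case adds.
payload = {
    "batch": [{"text": "How do I reset my password?"}],
    "documents": [
        {
            "body": "To reset your password, open Settings and ...",
            "source": "https://example.com/help/passwords",
            "title": "Password help",
        }
    ],
}
resp = requests.post(
    "https://APPLICATION_ID.applications.lucidworks.com"  # hypothetical host
    "/ai/async-prediction/USE_CASE/MODEL_ID",
    headers={"Authorization": "Bearer ACCESS_TOKEN"},
    json=payload,
)
print(resp.json())  # expected to include predictionId and status
```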
The response includes:

- A SOURCES line of text that contains the URL of the documents used to generate the answer.
- A memoryUuid that can be used to retrieve the LLM's chat history.

Chat history is controlled by useCaseConfig parameters in the request. Refer to the API spec for more information.
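Once the task completes, the SOURCES line and memoryUuid can be pulled out of the retrieved result. The overall response shape below is an assumption for illustration; only the SOURCES line and the memoryUuid field are documented here.

```python
# Assumed response shape for illustration; only "SOURCES" and "memoryUuid"
# are named in this section.
result = {
    "status": "READY",
    "predictions": [
        {
            "response": "Reset it in Settings.\nSOURCES: https://example.com/help/passwords",
            "memoryUuid": "1f0c7f3e-...",
        }
    ],
}

prediction = result["predictions"][0]
answer, _, sources = prediction["response"].partition("SOURCES:")
memory_uuid = prediction["memoryUuid"]  # keep this to continue the conversation
print(answer.strip(), sources.strip(), memory_uuid)
```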
In the chat history RAG use case, the LLM returns a memoryUuid field. If the UUID is passed back in a subsequent request, the LLM uses the cached query and response as part of its context. This lets the LLM be used as a chatbot, where previous queries and responses are used to generate the next response.
The following is an example request. This example does not include:
- modelConfig parameters, but you can submit requests that include parameters described in the API spec.
- useCaseConfig parameters, but you can submit requests that include parameters described in Unique values for the chat history RAG use case.
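A sketch of such a follow-up request, passing the memoryUuid back so the cached query and response become part of the LLM's context. Placing memoryUuid under useCaseConfig, the batch input shape, and the host are assumptions; refer to the API spec.

```python
import requests

# memoryUuid comes from a previous response; placing it under useCaseConfig
# is an assumption for illustration.
payload = {
    "batch": [{"text": "And how do I change my username?"}],
    "useCaseConfig": {
        "memoryUuid": "1f0c7f3e-...",  # UUID returned by the earlier request
    },
}
resp = requests.post(
    "https://APPLICATION_ID.applications.lucidworks.com"  # hypothetical host
    "/ai/async-prediction/USE_CASE/MODEL_ID",
    headers={"Authorization": "Bearer ACCESS_TOKEN"},
    json=payload,
)
print(resp.json())
```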