- POST request - submits a prediction task for a specific useCase and modelId. The API responds with the following information:
  - predictionId. A unique UUID for the submitted prediction task that can be used later to retrieve the results.
  - status. The current state of the prediction task.
- GET request - uses the predictionId returned from a previously-submitted POST request and returns the results associated with that request.
- GET request - uses the USE_CASE and MODEL_ID fields in the /async-prediction use case POST requests.
The generic path for the Async Prediction API is /ai/async-prediction/USE_CASE/MODEL_ID.
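For example, submitting a prediction task might look like the minimal sketch below. The hostname pattern, use case name, and request body shape are assumptions for illustration only; substitute the values for your own application and use case.

```python
import requests

# Placeholder values - substitute your own. The hostname pattern below is an
# assumption for illustration; use the base URL for your Lucidworks AI application.
APPLICATION_ID = "your-application-id"
USE_CASE = "pass-through"            # example use case name (assumption)
MODEL_ID = "your-model-id"
ACCESS_TOKEN = "your-bearer-token"   # generated with the machinelearning.predict scope

url = (
    f"https://{APPLICATION_ID}.applications.lucidworks.com"
    f"/ai/async-prediction/{USE_CASE}/{MODEL_ID}"
)

# The body fields vary by use case; this "batch" field is illustrative only.
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"batch": [{"text": "What is Lucidworks AI?"}]},
)
response.raise_for_status()

submission = response.json()
print(submission["predictionId"])  # UUID used later to retrieve the results
print(submission["status"])        # current state of the prediction task
```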
For detailed API specifications in Swagger/OpenAPI format, see Platform APIs.
Prerequisites
To use this API, you need:
- The unique APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.
- A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.
- Other required fields specified in each individual use case.
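If you do not already have a bearer token, it is obtained through the Authentication API using your application credentials. The sketch below uses a generic OAuth2 client-credentials request; the token URL and field names are placeholders, not the documented endpoint, so refer to the Authentication API documentation for the actual request.

```python
import requests

# Placeholder endpoint and credentials - the real token endpoint and request
# shape are defined by the Authentication API documentation, not by this sketch.
TOKEN_URL = "https://identity.example.com/oauth2/token"  # hypothetical URL
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

token_response = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
)
token_response.raise_for_status()

# The access token is sent as a bearer token on every Async Prediction request.
ACCESS_TOKEN = token_response.json()["access_token"]
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
```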
Common POST request parameters and fields
Some parameters in the /ai/async-prediction/USE_CASE/MODEL_ID POST request are common to all of the generative AI (Gen-AI) use cases, such as the modelConfig parameter.
Also referred to as hyperparameters, these fields set certain controls on the response.
Refer to the API spec for more information.
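As an illustration, a request body that includes modelConfig might look like the following sketch. The field names shown (temperature, maxTokens) and the batch structure are assumptions, not a verified list; the API spec defines the exact modelConfig fields each model supports.

```python
# Illustrative request body with modelConfig hyperparameters.
# The field names below are assumptions, not an exhaustive or verified list;
# the API spec defines the exact modelConfig fields each model supports.
payload = {
    "batch": [{"text": "Summarize the return policy in one sentence."}],
    "modelConfig": {
        "temperature": 0.7,  # sampling temperature: higher values give more varied output
        "maxTokens": 256,    # upper bound on the length of the generated response
    },
}
```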
Async prediction use case by modelId
The /ai/async-prediction/USE_CASE/MODEL_ID request submits a prediction task for a specific useCase and modelId. Upon submission, a successful response includes a unique predictionId and a status. The predictionId can be used later in the GET request to retrieve the results.
Unique fields and values in the request are described in each use case.
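For example, a polling loop for the GET request might look like the sketch below. Appending the predictionId to the generic path and the "SUBMITTED" status value checked here are assumptions for illustration; the API spec defines the exact GET path and the possible status values.

```python
import time
import requests

def wait_for_prediction(base_url, access_token, prediction_id, interval=5):
    """Poll the Async Prediction API until the task is no longer in progress.

    base_url is the generic /ai/async-prediction/USE_CASE/MODEL_ID path used in
    the original POST request. Appending prediction_id to that path, and the
    "SUBMITTED" status value, are assumptions; confirm both against the API spec.
    """
    headers = {"Authorization": f"Bearer {access_token}"}
    while True:
        response = requests.get(f"{base_url}/{prediction_id}", headers=headers)
        response.raise_for_status()
        body = response.json()
        if body["status"] != "SUBMITTED":
            return body  # results associated with the original POST request
        time.sleep(interval)
```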