Async Prediction API
The Lucidworks AI Async Prediction API is used to send asynchronous API calls that run predictions on specific models.
The Lucidworks AI Async Prediction API contains two requests:
- POST request - submits a prediction task for a specific useCase and modelId. The API responds with the following information:
  - predictionId. A unique UUID for the submitted prediction task that can be used later to retrieve the results.
  - status. The current state of the prediction task.
- GET request - uses the predictionId from a previously-submitted POST request and returns the results associated with that request.
The LWAI Async Prediction API supports models hosted by Lucidworks and specific third-party models. The Lucidworks AI Use Case API returns a list of all supported models. For more information about supported models, see Generative AI models.
You can enter the values returned by the Lucidworks AI Use Case API in the USE_CASE and MODEL_ID fields of the /async-prediction use case POST requests.
The generic path for the Async Prediction API is /ai/async-prediction/USE_CASE/MODEL_ID.
To view the full configuration specification for an API, click the View API specification button.
Prerequisites
To use this API, you need:
- The unique APPLICATION_ID for your Lucidworks AI application. For more information, see Credentials to use APIs.
- A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.
- Other required fields specified in each individual use case.
Common POST request parameters and fields
Some parameters in the /ai/async-prediction/USE_CASE/MODEL_ID POST request, such as the modelConfig parameter, are common to all of the generative AI (GenAI) use cases.
Also referred to as hyperparameters, these fields set certain controls on the response.
Refer to the API spec for more information.
Async prediction use case by modelId
The /ai/async-prediction/USE_CASE/MODEL_ID request submits a prediction task for a specific useCase and modelId. Upon submission, a successful response includes a unique predictionId and a status. The predictionId can be used later in the GET request to retrieve the results.
Unique fields and values in the request are described in each use case.
Example POST request
curl --request POST \
--url https://APPLICATION_ID.applications.lucidworks.com/ai/async-prediction/USE_CASE/MODEL_ID \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --data '{
"batch": [
{
"text": "Content for the model to analyze."
}
],
"useCaseConfig": [
{
"useSystemPrompt": true
}
],
"modelConfig": [
{
"temperature": 0.8,
"topP": 1,
"presencePenalty": 2,
"frequencyPenalty": 1,
"maxTokens": 1
}
]
}'
The following is an example of a successful response:
{
"predictionId": "fd110486-f168-47c0-a419-1518a4840589",
"status": "SUBMITTED"
}
The following is an example of an error response:
{
"predictionId": "fd110486-f168-47c0-a419-1518a4840589",
"status": "ERROR",
"message": "System prompt exceeded the maximum number of allowed input tokens: 81 vs -1091798"
}
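The curl submission above can also be sketched in Python with only the standard library. This is a minimal illustration, not an official client: the function names are invented here, and the application ID, token, and payload values are placeholders you must replace with your own.

```python
import json
import urllib.request


def async_prediction_url(app_id: str, use_case: str, model_id: str) -> str:
    """Build the generic /ai/async-prediction/USE_CASE/MODEL_ID path."""
    return (f"https://{app_id}.applications.lucidworks.com"
            f"/ai/async-prediction/{use_case}/{model_id}")


def submit_prediction(app_id: str, use_case: str, model_id: str,
                      token: str, payload: dict) -> dict:
    """POST a prediction task; a successful response carries predictionId and status."""
    req = urllib.request.Request(
        async_prediction_url(app_id, use_case, model_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # placeholder bearer token
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # e.g. {"predictionId": "...", "status": "SUBMITTED"}
        return json.load(resp)
```

A production client would add error handling for HTTP 4xx/5xx responses; this sketch only shows the request shape.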
Example GET request
curl --request GET \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-prediction/PREDICTION_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN'
The response varies based on the specific use case and the fields included in the request.
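Because the task runs asynchronously, clients typically poll the GET endpoint until the status changes. The sketch below assumes only what this topic shows (a SUBMITTED state on submission); the exact terminal status names vary, so the loop simply returns as soon as the task leaves SUBMITTED. Function names are illustrative, not part of the API.

```python
import json
import time
import urllib.request


def get_prediction(app_id: str, prediction_id: str, token: str) -> dict:
    """GET the result for a previously submitted predictionId."""
    req = urllib.request.Request(
        f"https://{app_id}.applications.lucidworks.com"
        f"/ai/async-prediction/{prediction_id}",
        headers={"Accept": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def wait_for_prediction(fetch, interval: float = 2.0,
                        timeout: float = 120.0) -> dict:
    """Call fetch() until the task leaves the SUBMITTED state or timeout expires.

    fetch is any zero-argument callable returning the parsed response body,
    e.g. lambda: get_prediction(app_id, prediction_id, token).
    """
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if result.get("status") != "SUBMITTED":
            return result  # completed or ERROR; body shape varies by use case
        if time.monotonic() >= deadline:
            raise TimeoutError(f"prediction still pending after {timeout:.0f}s")
        time.sleep(interval)
```

Passing the fetch callable in keeps the polling logic independent of the HTTP layer, which also makes it easy to test without a live application.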
Async Prediction API use cases
The use cases available in the Lucidworks AI Async Prediction API are detailed in the following topics: