POST /ai/async-prediction/{USE_CASE}/{MODEL_ID}
Model predictions by use case
curl --request POST \
  --url https://application_id.applications.lucidworks.com/ai/async-prediction/{USE_CASE}/{MODEL_ID} \
  --header 'Content-Type: application/json' \
  --data '{
  "batch": [
    {
      "text": "The content the model analyzes."
    }
  ],
  "modelConfig": {
    "temperature": 0.8,
    "topP": 1,
    "topK": -1,
    "presencePenalty": 2,
    "frequencyPenalty": 1,
    "maxTokens": 1,
    "apiKey": "API key specific to use case and model",
    "azureDeployment": "DEPLOYMENT_NAME",
    "azureEndpoint": "https://azure.endpoint.com",
    "googleProjectId": "[GOOGLE_PROJECT_ID]",
    "googleRegion": "[GOOGLE_PROJECT_REGION_OF_MODEL_ACCESS]"
  }
}'
{
  "predictionId": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "status": "SUBMITTED"
}
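
For reference, the same request can be sent from Python. This is a minimal sketch using the requests library; the hostname, access token, use case, and model ID are placeholder assumptions, and the body mirrors the curl example above.

import requests

# Placeholder values -- substitute your own application hostname, access token,
# use case, and model ID. These are assumptions, not values from this page.
APPLICATION_HOST = "application_id.applications.lucidworks.com"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
USE_CASE = "YOUR_USE_CASE"
MODEL_ID = "YOUR_MODEL_ID"

url = f"https://{APPLICATION_HOST}/ai/async-prediction/{USE_CASE}/{MODEL_ID}"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

# The body mirrors the curl example above; the fields that are valid depend on the use case.
body = {
    "batch": [
        {"text": "The content the model analyzes."}
    ],
    "modelConfig": {
        "temperature": 0.8,
        "topP": 1,
    },
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # e.g. {"predictionId": "...", "status": "SUBMITTED"}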

Headers

Authorization
string

Bearer ACCESS_TOKEN

The authentication and authorization access token.

Content-Type
string

application/json

Path Parameters

USE_CASE
string
required

The name of the use case for the model.

MODEL_ID
string
required

The unique identifier for the model.

Body

application/json

The fields in the request body vary based on the use case specified in the request. See the documentation for that use case for its valid fields and values.

Response

200
application/json

OK

The response to the POST async prediction request submitted for the specified USE_CASE and MODEL_ID.
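
As a rough illustration, the 200 response body can be mapped onto a small structure like the one below. The field names come from the example response above; the helper function is hypothetical and not part of any Lucidworks client library.

from dataclasses import dataclass

@dataclass
class AsyncPredictionResponse:
    # Field names match the 200 response body shown above.
    predictionId: str  # identifier of the submitted prediction
    status: str        # e.g. "SUBMITTED"

def parse_async_prediction_response(payload: dict) -> AsyncPredictionResponse:
    # Hypothetical helper: extract the documented fields from the JSON body.
    return AsyncPredictionResponse(
        predictionId=payload["predictionId"],
        status=payload["status"],
    )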