    Async Prediction API

    The Lucidworks AI Async Prediction API is used to send asynchronous API calls that run predictions on specific models.

    The Lucidworks AI Async Prediction API contains two requests:

    • POST request - submits a prediction task for a specific useCase and modelId. The API responds with the following information:

      • predictionId. A universally unique identifier (UUID) for the submitted prediction task, used later to retrieve the results.

      • status. The current state of the prediction task.

    • GET request - uses the predictionId returned by a previously-submitted POST request and returns the results associated with that request.

    Lucidworks deployed the mistral-7b-instruct and llama-3-8b-instruct models. The Lucidworks AI Use Case API returns a list of all supported models. For more information about supported models, see Generative AI models.

    You can enter the values returned in the Lucidworks AI Use Case API for the USE_CASE and MODEL_ID fields in the /async-prediction use case POST requests.

    The generic path for the Async Prediction API is /ai/async-prediction/USE_CASE/MODEL_ID.
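
    For example, a summarization prediction served by the llama-3-8b-instruct model would be submitted to the following path. The summarization slug is an illustrative assumption; always use the exact USE_CASE and MODEL_ID values returned by the Use Case API.

    /ai/async-prediction/summarization/llama-3-8b-instruct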

    Prerequisites

    To use this API, you need:

    • The unique APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.

    • A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.

    • Other required fields specified in each individual use case.
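
    The following is a minimal sketch of generating a bearer token with an OAuth2 client-credentials flow. TOKEN_ENDPOINT, CLIENT_ID, and CLIENT_SECRET are placeholders rather than documented values; see the Authentication API for the actual endpoint and credentials.

    # Hypothetical token request; TOKEN_ENDPOINT, CLIENT_ID, and CLIENT_SECRET
    # are placeholders. See the Authentication API for the real values.
    curl --request POST \
      --url https://TOKEN_ENDPOINT/oauth2/token \
      --header 'Content-Type: application/x-www-form-urlencoded' \
      --user 'CLIENT_ID:CLIENT_SECRET' \
      --data 'grant_type=client_credentials&scope=machinelearning.predict'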

    Common POST request parameters and fields

    modelConfig

    Some parameters of the /ai/async-prediction/USE_CASE/MODEL_ID POST request are common to all of the generative AI (GenAI) use cases, including the modelConfig parameter. If you do not enter values, the following defaults are used.

    "modelConfig":{
      "temperature": 0.7,
      "topP": 1.0,
      "presencePenalty": 0.0,
      "frequencyPenalty": 0.0,
      "maxTokens": 256
    }

    Also referred to as hyperparameters, these fields set certain controls on the response of an LLM:


    temperature

    A sampling temperature between 0 and 2. A higher sampling temperature, such as 0.8, results in more random (creative) output, while a lower value, such as 0.2, results in more focused (conservative) output. A lower value does not guarantee the model returns the same response for the same input.

    topP

    A floating-point number between 0 and 1 that controls the cumulative probability of the top tokens to consider, also referred to as top probability. The model samples only from the smallest set of tokens whose cumulative probability reaches the topP threshold; set topP to 1 to consider all tokens. The higher the value, the more diverse the output.

    presencePenalty

    A floating-point number between -2.0 and 2.0 that penalizes new tokens based on whether they have already appeared in the text. This increases the model’s use of diverse tokens. A value greater than zero (0) encourages the model to use new tokens. A value less than zero (0) encourages the model to repeat existing tokens.

    frequencyPenalty

    A floating-point number between -2.0 and 2.0 that penalizes new tokens based on their frequency in the generated text. A value greater than zero (0) encourages the model to use new tokens. A value less than zero (0) encourages the model to repeat existing tokens.

    maxTokens

    The maximum number of tokens to generate per output sequence. The maximum value varies by model; review individual model specifications before setting a value greater than 2048.
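
    You only need to send the fields you want to override; omitted fields fall back to the defaults above. For example, this sketch requests more focused output and a longer response than the defaults:

    "modelConfig":{
      "temperature": 0.2,
      "maxTokens": 512
    }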

    apiKey

    This parameter is optional in general but required when one of the following externally-hosted models is used for prediction. The value is the secret specified in the external model. For:

    • OpenAI models, "apiKey" is the value in the model’s "[OPENAI_API_KEY]" field. For more information, see Authentication API keys.

    • Azure OpenAI models, "apiKey" is the value generated by Azure in either the model’s "[KEY1 or KEY2]" field. For requirements to use Azure models, see Generative AI models.

    • Google VertexAI models, "apiKey" is the value in the model’s "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]" field. For more information, see Create and delete Google service account keys.

    The parameter (for OpenAI, Azure OpenAI, or Google VertexAI models) is only available for the following use cases:

    • Pass-through

    • RAG

    • Standalone query rewriter

    • Summarization

    • Keyword extraction

    • NER
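
    For example, a prediction that routes to an OpenAI model passes the key with the request. A minimal sketch, assuming "apiKey" is supplied alongside the other modelConfig fields:

    "modelConfig":{
      "apiKey": "[OPENAI_API_KEY]",
      "temperature": 0.7
    }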

    azureDeployment

    The optional "azureDeployment": "[DEPLOYMENT_NAME]" parameter is the deployment name of the Azure OpenAI model and is only required when a deployed Azure OpenAI model is used for prediction.

    azureEndpoint

    The optional "azureEndpoint": "[ENDPOINT]" parameter is the URL endpoint of the deployed Azure OpenAI model and is only required when a deployed Azure OpenAI model is used for prediction.

    googleProjectId

    The optional "googleProjectId": "[GOOGLE_PROJECT_ID]" parameter is only required when a Google VertexAI model is used for prediction.

    googleRegion

    The optional "googleRegion": "[GOOGLE_PROJECT_REGION_OF_MODEL_ACCESS]" parameter is only required when a Google VertexAI model is used for prediction. The possible region values are:

    • us-central1

    • us-west4

    • northamerica-northeast1

    • us-east4

    • us-west1

    • asia-northeast3

    • asia-southeast1

    • asia-northeast
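
    Similarly, a prediction against a Google VertexAI model supplies the Google-specific values together. A minimal sketch, assuming they are sent alongside the other modelConfig fields:

    "modelConfig":{
      "apiKey": "[BASE64_ENCODED_GOOGLE_SERVICE_ACCOUNT_KEY]",
      "googleProjectId": "[GOOGLE_PROJECT_ID]",
      "googleRegion": "us-central1"
    }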

    Async prediction use case by modelId

    The /ai/async-prediction/USE_CASE/MODEL_ID request submits a prediction task for a specific useCase and modelId. Upon submission, a successful response includes a unique predictionId and a status. The predictionId can be used later in the GET request to retrieve the results.

    Unique fields and values in the request are described in each use case.

    POST response parameters and fields

    The fields in the response to the POST /ai/async-prediction/USE_CASE/MODEL_ID request are as follows:


    predictionId

    The universally unique identifier (UUID) returned in response to the POST request. This UUID is required in the GET request to retrieve results. For example, fd110486-f168-47c0-a419-1518a4840589.

    status

    The current status of the prediction. Values are:

    • SUBMITTED - The POST request was successful and the response returned the predictionId and status. The predictionId is used in the GET request.

    • ERROR - An error was generated when the request was sent.

    • READY - The results associated with the predictionId are available and ready to be retrieved.
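
    Because the prediction runs asynchronously, a client typically polls the GET endpoint until the task leaves the SUBMITTED state. The following loop is a minimal sketch; it assumes the GET response continues to include the status field while the task is pending.

    # Hypothetical polling loop; PREDICTION_ID comes from the POST response
    # and ACCESS_TOKEN is your bearer token.
    while true; do
      RESPONSE=$(curl --silent --request GET \
        --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-prediction/PREDICTION_ID \
        --header 'Authorization: Bearer ACCESS_TOKEN')
      # Stop polling once the task is no longer pending (READY or ERROR).
      echo "$RESPONSE" | grep -q '"status": "SUBMITTED"' || break
      sleep 5
    done
    echo "$RESPONSE"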

    Example POST request

    curl --request POST \
      --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-prediction/USE_CASE/MODEL_ID \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer ACCESS_TOKEN' \
      --header 'Content-Type: application/json' \
      --data '{
      "batch": [
        {
          "text": "Content for the model to analyze."
        }
      ],
      "useCaseConfig": {
        "useSystemPrompt": true
      },
      "modelConfig": {
        "temperature": 0.8,
        "topP": 1,
        "presencePenalty": 2,
        "frequencyPenalty": 1,
        "maxTokens": 1
      }
    }'

    The following is an example of a successful response:

    {
      "predictionId": "fd110486-f168-47c0-a419-1518a4840589",
      "status": "SUBMITTED"
    }

    The following is an example of an error response:

    {
      "predictionId": "fd110486-f168-47c0-a419-1518a4840589",
      "status": "ERROR",
      "message": "System prompt exceeded the maximum number of allowed input tokens: 81 vs -1091798"
    }

    Example GET request

    curl --request GET \
      --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-prediction/PREDICTION_ID \
      --header 'Authorization: Bearer ACCESS_TOKEN'

    The response varies based on the specific use case and the fields included in the request.