For detailed API specifications in Swagger/OpenAPI format, see Platform APIs.
Prerequisites
To use this API, you need:

- The unique `APPLICATION_ID` for your Lucidworks AI application. For more information, see credentials to use APIs.
- A bearer token generated with a scope value of `machinelearning.predict`. For more information, see Authentication API.
- The `USE_CASE` and `MODEL_ID` fields for the use case request. The path is: `/ai/async-prediction/USE_CASE/MODEL_ID`. A list of supported models is returned in the Lucidworks AI Use Case API. For more information about supported models, see Generative AI models.
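For example, a request to this endpoint might look like the following sketch. The base URL is a placeholder; see Platform APIs for the host that applies to your deployment.

```bash
# A minimal sketch of an async prediction request.
# BASE_URL is a placeholder for your deployment's host (see Platform APIs).
# USE_CASE and MODEL_ID come from the Lucidworks AI Use Case API.
curl -X POST "$BASE_URL/ai/async-prediction/USE_CASE/MODEL_ID" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"batch": [{"text": "who was the first president of the USA?"}]}'
```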
Common parameters and fields
Some parameters in the `/ai/async-prediction/USE_CASE/MODEL_ID` request are common to all of the generative AI (Gen-AI) use cases, such as the `modelConfig` parameter.
Also referred to as hyperparameters, these fields set certain controls on the response.
Refer to the API spec for more information.
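For illustration, a request body that sets `modelConfig` hyperparameters might look like the following sketch. The field names shown (`temperature`, `maxTokens`) are examples of common hyperparameters; confirm the exact names and valid ranges in the API spec.

```json
{
  "batch": [
    {
      "text": "who was the first president of the USA?"
    }
  ],
  "modelConfig": {
    "temperature": 0.7,
    "maxTokens": 256
  }
}
```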
Unique values for the pass-through use case
Some parameter values available in the pass-through use case are unique to this use case, including values for the `useCaseConfig` parameter.
Refer to the API spec for more information.
Use System Prompt
"useCaseConfig": "useSystemPrompt": boolean
This parameter can be used if custom prompts are needed, or if the prompt response format needs to be manipulated. However, longer prompts may increase response time.
Some models, such as `mistral-7b-instruct` and `llama-3-8b-instruct`, generate more effective results when system prompts are included in the request.
If"useSystemPrompt": true, the LLM input is automatically wrapped into a model-specific prompt format with a generic system prompt before passing it to the model or third-party API.
If"useSystemPrompt": false, thebatch.textvalue serves as the prompt for the model. The LLM input must accommodate model-specific requirements because the input is passed as is.
- The format for the `mistral-7b-instruct` model must be specific to Mistral: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- The format for the `llama-3-8b-instruct` model must be specific to Llama: https://huggingface.co/blog/llama3#how-to-prompt-llama-3
- The text input for OpenAI models must be valid JSON to match the OpenAI API specification: https://platform.openai.com/docs/api-reference/chat/create
- The format for the Google Vertex AI models must adhere to the guidelines at: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini
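For example, with `"useSystemPrompt": false`, a request to `mistral-7b-instruct` must carry a prompt already wrapped in Mistral's instruction tags. A minimal sketch, assuming the `batch.text` structure described above (the exact tag syntax is defined in the Mistral documentation linked in the list):

```json
{
  "batch": [
    {
      "text": "<s>[INST] who was the first president of the USA? [/INST]"
    }
  ],
  "useCaseConfig": {
    "useSystemPrompt": false
  }
}
```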
The following `useSystemPrompt` example does not include `modelConfig` parameters, but you can submit requests that include parameters described in Common parameters and fields.
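A minimal sketch of a `"useSystemPrompt": true` request, again assuming the `batch.text` structure described above:

```json
{
  "batch": [
    {
      "text": "who was the first president of the USA?"
    }
  ],
  "useCaseConfig": {
    "useSystemPrompt": true
  }
}
```

With `"useSystemPrompt": true`, this plain-text input is wrapped into the model-specific prompt format automatically, so no Mistral or Llama tags are needed.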
Data Type
"useCaseConfig": "dataType": "string"
This optional parameter enables model-specific handling in the Prediction API to help improve model accuracy. Use the available `dataType` value that best aligns with the text sent to the Prediction API.
The values for `dataType` in the pass-through use case are:
- `"dataType": "text"`

  This value is equivalent to `"useSystemPrompt": true` and applies a pre-defined, generic prompt.

- `"dataType": "raw_prompt"`

  This value is equivalent to `"useSystemPrompt": false` and is passed directly to the model or third-party API.

- `"dataType": "json_prompt"`

  This value follows a generic JSON prompt format that allows three roles:

  - `system`
  - `user` - Only the last user message is truncated. If the API does not support system prompts, the user role is substituted for the system role.
  - `assistant` - If the last message role is `assistant`, it is used as a pre-fill for generation and is the first generated token the model uses. The pre-fill is prepended to the model output, which makes models less verbose and helps enforce specific outputs such as YAML. Google Vertex AI models do not support generation pre-fills, so an exception error is generated.

  Additional `json_prompt` information:

  - Consecutive messages for the same role are merged.
  - You can paste the information for a hosted model into the `json_prompt` value and change the model name in the stage.
"dataType": "json_prompt"`` example does not include modelConfig` parameters, but you can submit requests that include parameters described in Common parameters and fields.
Verify information sent to Gen-AI models
The Lucidworks AI Prompting Preview API returns Prediction API pass-through use case prompts before they are sent to any generative AI (Gen-AI) model. You can use this information to debug and to ensure that input to the Gen-AI model is valid and within the model's processing limits.
For more information, see Prompting Preview API.