Lucidworks AI Prediction API
To use this API, you need:

- The `APPLICATION_ID` for your Lucidworks AI application. For more information, see credentials to use APIs.
- A bearer token generated with a scope value of `machinelearning.predict`. For more information, see Authentication API.
- The `USE_CASE` and `MODEL_ID` fields for the use case request. The path is: `/ai/prediction/USE_CASE/MODEL_ID`. A list of supported models is returned in the Lucidworks AI Use Case API. For more information about supported models, see Generative AI models.

Common parameters and fields

Some parameters of the `/ai/async-prediction/USE_CASE/MODEL_ID` request are common to all of the generative AI (GenAI) use cases, such as the `modelConfig` parameter.
Also referred to as hyperparameters, these fields set certain controls on the response.
Refer to the API spec for more information.
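As an illustration, a minimal request sketch follows. The hostname pattern, the `pass_through` use case value, and the `temperature` and `maxTokens` hyperparameter names are assumptions for this sketch; confirm the exact host and `modelConfig` fields in the API spec.

```bash
# Hypothetical pass-through prediction request; APPLICATION_ID, MODEL_ID,
# and the modelConfig field names are placeholders, not confirmed values.
curl --request POST \
  --url "https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/pass_through/MODEL_ID" \
  --header "Authorization: Bearer $BEARER_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "batch": [
      { "text": "What are the benefits of semantic search?" }
    ],
    "modelConfig": {
      "temperature": 0.7,
      "maxTokens": 256
    }
  }'
```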
Some parameters and fields are unique to the `pass-through` use case, including values for the `useCaseConfig` parameter. Refer to the API spec for more information.
"useCaseConfig": "useSystemPrompt": boolean
This parameter can be used because some models, such as `mistral-7b-instruct` and `llama-3-8b-instruct`, generate more effective results when system prompts are included in the request.

If `"useSystemPrompt": true`, the LLM input is automatically wrapped into a model-specific prompt format with a generic system prompt before it is passed to the model or third-party API.

If `"useSystemPrompt": false`, the `batch.text` value serves as the prompt for the model. The LLM input must accommodate model-specific requirements, because the input is passed as is. For example, the prompt format for the `mistral-7b-instruct` model must be specific to Mistral:
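A minimal sketch of Mistral's instruction format; the prompt text itself is illustrative:

```text
<s>[INST] What are the benefits of semantic search? [/INST]
```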
Similarly, the prompt format for the `llama-3-8b-instruct` model must be specific to Llama:
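A minimal sketch using the Llama 3 chat template tokens; the system and user text are illustrative:

```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

What are the benefits of semantic search?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```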
The following `useSystemPrompt` example does not include `modelConfig` parameters, but you can submit requests that include parameters described in Common parameters and fields.
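A sketch of such a request body; the prompt text is illustrative:

```json
{
  "batch": [
    { "text": "What are the benefits of semantic search?" }
  ],
  "useCaseConfig": {
    "useSystemPrompt": true
  }
}
```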
"useCaseConfig": "dataType": "string"
This optional parameter enables model-specific handling in the Prediction API to help improve model accuracy. Use the `dataType` value that best aligns with the text sent to the Prediction API.
The values for `dataType` in the pass-through use case are:
"dataType": "text"
This value is equivalent to "useSystemPrompt": true
and is a pre-defined, generic prompt.
"dataType": "raw_prompt"
This value is equivalent to "useSystemPrompt": false
and is passed directly to the model or third-party API.
"dataType": "json_prompt"
This value follows the generics that allow three roles:
system
user
assistant
assistant
, it is used as a pre-fill for generation and is the first generated token the model uses. The pre-fill is prepended to the model output, which makes models less verbose and helps enforce specific outputs such as YAML.
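A sketch of a `json_prompt` message array with an `assistant` pre-fill that nudges the model toward YAML output; the message text is illustrative:

```json
[
  { "role": "system", "content": "You answer in YAML only." },
  { "role": "user", "content": "List three benefits of semantic search." },
  { "role": "assistant", "content": "benefits:" }
]
```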
Additional `json_prompt` information: you can enter the `json_prompt` value and change the model name in the stage.

The following `"dataType": "json_prompt"` example does not include `modelConfig` parameters, but you can submit requests that include parameters described in Common parameters and fields.
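A sketch of such a request body; passing the messages as an escaped JSON string in `batch.text` is an assumption of this sketch, so confirm the exact encoding in the API spec:

```json
{
  "batch": [
    {
      "text": "[{\"role\": \"system\", \"content\": \"You answer in YAML only.\"}, {\"role\": \"user\", \"content\": \"List three benefits of semantic search.\"}, {\"role\": \"assistant\", \"content\": \"benefits:\"}]"
    }
  ],
  "useCaseConfig": {
    "dataType": "json_prompt"
  }
}
```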
The Prompting Preview API returns generated `passthrough` use case prompts before they are sent to any generative AI (GenAI) model. You can use this information to help debug, and to ensure that the input to the GenAI model is valid and within the model’s processing limits. For more information, see Prompting Preview API.