The passthrough use case lets you use the service as a proxy to the large language model (LLM). The service sends your text to the LLM as-is, with no additional prompts or other information.
The POST request obtains and indexes prediction information for the specified use case, and returns a unique predictionId and the status of the request. The predictionId can be used later in the GET request to retrieve the results.
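The submit-then-poll flow described above can be sketched as follows. This is a minimal illustration, not the service's official client: the request path (/predictions), payload field names (useCase, modelId, text), and the helper names are assumptions for illustration; the HTTP transport is passed in as a callable so the request/response shape is shown without tying the sketch to a particular HTTP library.

```python
import json

def submit_prediction(send, use_case, model_id, text, token):
    """POST the text for the given use case and model.

    `send(method, path, headers, body)` performs the HTTP call and returns
    the response body as a string. Returns (predictionId, status) from the
    service response. Path and payload field names are illustrative.
    """
    headers = {
        "Authorization": f"Bearer {token}",  # authentication and authorization access token
        "Content-Type": "application/json",
    }
    payload = {"useCase": use_case, "modelId": model_id, "text": text}
    resp = json.loads(send("POST", "/predictions", headers, json.dumps(payload)))
    return resp["predictionId"], resp["status"]

def get_prediction(send, prediction_id, token):
    """GET the results for a previously returned predictionId."""
    headers = {"Authorization": f"Bearer {token}"}
    return json.loads(send("GET", f"/predictions/{prediction_id}", headers, None))
```

In use, you would call submit_prediction once, keep the returned predictionId, and call get_prediction with it later to retrieve the results once the status indicates completion.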
The authentication and authorization access token.
application/json
"application/json"
Unique identifier for the model.
"6a092bd4-5098-466c-94aa-40bf6829430"
OK
This is the response to the POST prediction request submitted for a specific useCase and modelId.