The Retrieval Augmented Generation (RAG) use case of the LWAI Prediction API inserts candidate documents into an LLM’s context to ground the generated response in those documents instead of generating an answer from details stored in the LLM’s trained weights. This helps reduce the frequency of LLM hallucinations. This type of search adds guardrails so the LLM can search private data collections. The RAG use case can run queries against external documents passed in as part of the request. This use case can be used:
  • To generate answers based on the context of the collected responses (corpus)
  • To generate a response based on the context from responses to a previous request
For detailed API specifications in Swagger/OpenAPI format, see Platform APIs.

Prerequisites

To use this API, you need:
  • The unique APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.
  • A bearer token generated with a scope value of machinelearning.predict; a hedged token-request sketch follows this list. For more information, see Authentication API.
  • The USE_CASE and MODEL_ID fields for the use case request. The path is: /ai/prediction/USE_CASE/MODEL_ID. A list of supported models is returned in the Lucidworks AI Use Case API. For more information about supported models, see Generative AI models.
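The following sketch shows one way a client-credentials token request could look. The TOKEN_ENDPOINT URL and the form-field layout are assumptions for illustration only; use the endpoint and fields documented in the Authentication API.
# Hypothetical token request: TOKEN_ENDPOINT, CLIENT_ID, and CLIENT_SECRET are
# placeholders, and the endpoint shape is an assumption, not the documented API.
curl --request POST \
  --url https://TOKEN_ENDPOINT/oauth2/token \
  --header 'Content-type: application/x-www-form-urlencoded' \
  --data 'grant_type=client_credentials&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&scope=machinelearning.predict' \
  | jq -r '.access_token'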

Common parameters and fields

Some parameters in the /ai/prediction/USE_CASE/MODEL_ID request are common to all of the generative AI (GenAI) use cases, such as the modelConfig parameter. Also referred to as hyperparameters, these fields set certain controls on the response. Refer to the API spec for more information.
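For illustration, a request body that sets modelConfig hyperparameters might look like the following sketch. The temperature and maxTokens field names are assumptions here, not taken from this document; confirm the supported fields in the API spec.
{
  "batch": [
    {
      "text": "Why did I go to Germany?"
    }
  ],
  "modelConfig": {
    "temperature": 0.7,
    "maxTokens": 256
  }
}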

Unique values for the external documents RAG use case

Some parameter values available in the external documents RAG use case are unique to this use case, including values for the documents and useCaseConfig parameters. Refer to the API spec for more information.

Example request

The following is an example request. This example does not include optional parameters such as modelConfig.
curl --request POST \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/rag/MODEL_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-type: application/json' \
  --data '{
  "batch": [
    {
      "text": "Why did I go to Germany?",
      "documents": [{
        "body": "I'm off to Germany to go to the Oktoberfest!",
        "source": "http://example.com/112",
        "title": "Off to Germany!",
        "date": 1104537600
        }
      ]
    }
  ]
}'
The response to the previous request includes the:
  • Generated answer
  • SOURCES line of text that contains the URLs of the documents used to generate the answer
  • Metadata about the response:
    • memoryUuid that can be used to retrieve the LLM’s chat history
    • Count of tokens used to complete the query
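For orientation only, a response might be shaped like the following sketch. The field names here (prediction, response, memoryUuid, tokensUsed) are assumptions rather than the documented schema; check the API spec for the authoritative layout.
{
  "prediction": [
    {
      "response": "You went to Germany to attend the Oktoberfest.\nSOURCES: http://example.com/112",
      "memoryUuid": "27a887fe-3d7c-4ef0-9597-e2dfc054c20e",
      "tokensUsed": 76
    }
  ]
}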
The following example shows a request with the useCaseConfig parameters. If the initial request text had been unrelated to the supplied documents, such as “How is the weather?” instead of “Why did I go to Germany?”, these parameters ensure a sensible result: the answerNotFoundMessage value is returned instead of a fabricated answer.
curl --request POST \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/rag/MODEL_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-type: application/json' \
  --data '{
  "batch": [
    {
      "text": "Why did I go to Germany?",
      "documents": [{
        "body": "I'm off to Germany to go to the Oktoberfest!",
        "source": "http://example.com/112",
        "title": "Off to Germany!",
        "date": 1104537600
        }
      ],
      "useCaseConfig": {
        "extractRelevantContent": true,
        "answerNotFoundMessage": "No answer found."
      }
    }
  ]
}'
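As a rough illustration of that fallback behavior, an unrelated question such as “How is the weather?” might produce a response like the following sketch. The field names are assumptions, as in the earlier sketch.
{
  "prediction": [
    {
      "response": "No answer found."
    }
  ]
}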

Unique values for the chat history RAG use case

Some parameter values available in the chat history RAG use case are unique to this use case, including values for the documents and useCaseConfig parameters. Refer to the API spec for more information.

Example request using chat history

When using the RAG use case, the LLM service stores the query and its response in a cache. In addition to the response, it also returns a UUID value in the memoryUuid field. If the UUID is passed back in a subsequent request, the LLM uses the cached query and response as part of its context. This lets the LLM be used as a chatbot, where previous queries and responses are used to generate the next response. The following is an example request. This example does not include optional parameters such as modelConfig.
curl --request POST \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/rag/MODEL_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-type: application/json' \
  --data '{
  "batch": [
    {
      "text": "What is RAG?",
      "documents": [{
        "body": "Retrieval Augmented Generation, known as RAG, a framework promising to optimize generative AI and ensure its responses are up-to-date, relevant to the prompt, and most important",
        "source": "http://rag.com/115",
        "title": "What is Retrieval Augmented Generation",
        "date": 1104537600
        }
      ]
    }
  ],
  "useCaseConfig": {
    "memoryUuid": "27a887fe-3d7c-4ef0-9597-e2dfc054c20e"
  }
}'
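To make the round trip concrete, the following shell sketch chains two requests: it captures the UUID from the first response and passes it back in the second. The jq path to memoryUuid and the document body text are assumptions for illustration; adjust the filter to the actual response schema, and note that jq must be installed.
# Hedged sketch of a two-turn chat flow. The jq path to memoryUuid is an
# assumption; adjust it to match the response schema in the API spec.
FIRST=$(curl --silent --request POST \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/rag/MODEL_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-type: application/json' \
  --data '{"batch": [{"text": "What is RAG?", "documents": [{"body": "RAG grounds LLM answers in supplied documents.", "source": "http://rag.com/115", "title": "What is Retrieval Augmented Generation", "date": 1104537600}]}]}')

# Capture the UUID returned with the first response.
MEMORY_UUID=$(echo "$FIRST" | jq -r '.memoryUuid')

# Pass the UUID back so the cached first turn becomes part of the context.
curl --request POST \
  --url https://APPLICATION_ID.applications.lucidworks.com/ai/prediction/rag/MODEL_ID \
  --header 'Authorization: Bearer ACCESS_TOKEN' \
  --header 'Content-type: application/json' \
  --data "{\"batch\": [{\"text\": \"Summarize that in one sentence.\"}], \"useCaseConfig\": {\"memoryUuid\": \"$MEMORY_UUID\"}}"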