Fusion 5.9

    Async Chunking API

    The Lucidworks AI Async Chunking API asynchronously separates large pieces of text into smaller pieces, called chunks. The API then returns the chunks and their associated vectors. Currently, the maximum text size allowed for input is approximately 1 MB.

    Breaking text into chunks can produce a significant number of chunks, especially when chunks overlap or chunk sizes are small, so there are limits on how many chunks and vectors can be generated. These limits depend on factors such as the embedding model's dimension size and whether vector quantization is used.

    The Async Chunking API consists of two requests:

    • POST request. Submits text to a chunking strategy and model. Upon submission, the API responds with the following information:

      • chunkingId - a unique UUID for the submitted chunking task, used later to retrieve the results.

      • status - the current state of the chunking task.

    • GET request. Retrieves the results of a previously-submitted chunking request. You must provide the unique chunkingId received from the POST response. The API then returns the results of the chunking request associated with that chunkingId.
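
    The POST-then-GET workflow amounts to submit-and-poll. The following is a minimal sketch of the polling side, not an official client; the get_result callable is a stand-in for the GET request described above and is assumed to return the parsed JSON response.

```python
import time

def poll_chunking(get_result, chunking_id, interval=1.0, max_attempts=30):
    """Poll for a chunking task's results until it reaches a terminal status."""
    for _ in range(max_attempts):
        result = get_result(chunking_id)
        status = result["status"]
        if status in ("READY", "RETRIEVED"):
            return result  # results are available
        if status == "ERROR":
            raise RuntimeError(result.get("message", "Chunking failed."))
        time.sleep(interval)  # status is still SUBMITTED; wait and retry
    raise TimeoutError(f"Chunking task {chunking_id} not ready after {max_attempts} polls.")
```

    In a real client, get_result would issue the GET request shown later in this document and return its JSON body.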

    Chunking strategies (chunkers)

    There are five chunking strategies (chunkers) available in the Async Chunking API. Each chunker splits and processes submitted text differently.

    Prerequisites

    To use this API, you need:

    • The unique APPLICATION_ID for your Lucidworks AI application. For more information, see credentials to use APIs.

    • A bearer token generated with a scope value of machinelearning.predict. For more information, see Authentication API.

    • The CHUNKER and MODEL_ID fields for the use case request. The path is: /ai/async-chunking/CHUNKER/MODEL_ID. A list of supported models is returned in the Lucidworks AI Use Case API.

    Common parameters and fields

    modelConfig

    Some parameters of the /ai/async-chunking/CHUNKER/MODEL_ID request are common to all of the Async Chunking API requests, including the modelConfig parameter.

    For example:

    "modelConfig":{
      "vectorQuantizationMethod": "max-scale",
      "dimReductionSize":256
    }

    Also referred to as hyperparameters, these fields set certain controls on the response:


    vectorQuantizationMethod

    Vector quantization compresses vectors, which reduces data size and memory usage.

    The methods are:

    • min-max - Creates tensors of the text and converts them to uint8 by normalizing values to the range [0, 255].

    • max-scale - Finds the maximum absolute value for the encoded text, normalizes it by scaling the text to a range of -127 to 127, and then returns the quantized text as an 8-bit integer tensor.

    For more information, see vector quantization.
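
    As a rough illustration of the two methods (a sketch of the arithmetic, not Lucidworks' exact implementation):

```python
def quantize_min_max(vector):
    """min-max: normalize values to the range [0, 255] as unsigned 8-bit ints."""
    lo, hi = min(vector), max(vector)
    span = (hi - lo) or 1.0  # avoid division by zero for constant vectors
    return [int(round((x - lo) / span * 255)) for x in vector]

def quantize_max_scale(vector):
    """max-scale: scale by the maximum absolute value into [-127, 127]."""
    scale = max(abs(x) for x in vector) or 1.0
    return [int(round(x / scale * 127)) for x in vector]
```

    Either way, each floating-point value becomes a single byte, which is where the storage savings described below come from.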

    dimReductionSize

    Used to reduce vector size while maintaining good quality. This field accepts any integer greater than 0 and less than or equal to the vector dimension of the model.

    If you request a dimension larger than the model's vector dimension, a 400 Bad Request error is returned.

    Not every model is designed to support this parameter. If a model does not support it, a warning message is generated indicating that quality can decrease.
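
    The documented bounds can be expressed as a small validation rule. This helper is hypothetical, purely to illustrate the accepted range:

```python
def validate_dim_reduction(dim_reduction_size, model_dim):
    """Accept any integer above 0 and at most the model's vector dimension;
    larger values correspond to the API's 400 Bad Request error."""
    if not (0 < dim_reduction_size <= model_dim):
        raise ValueError("400 Bad Request: dimReductionSize must be in (0, model_dim]")
    return dim_reduction_size
```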

    Vector quantization

    To process large chunks of text efficiently, Lucidworks recommends you enter the appropriate value in the "modelConfig": "vectorQuantizationMethod" field to ensure that as much of the text as possible is chunked, even for large inputs.

    Quantized vectors are less resource intensive to store and compute, which decreases index and query processing time.

    In addition, because quantized vectors are smaller, more of them fit in the same amount of memory as typical (full-precision) vectors.

    For example, suppose the quantized vectors are [1,0,2], [2,3,1], [6,0,0], [0,0,2], the typical vectors are [0.012341,0.23434,0.01334], [0.5434,0.02134,0.05434], [0.76534,0.0953,0.1334], [0.398,0.38574,0.01384], and the memory budget is 5MB:

    5000 quantized vectors might fit in that 5MB because they are smaller, while only 500 typical vectors fit, because each one has more numerical data to store in memory. In this example, the numbers 5MB, 5000, and 500 are illustrative, not measured values.
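
    The trade-off can also be made concrete with back-of-the-envelope arithmetic. The byte sizes below are standard assumptions (1 byte per int8 value, 4 bytes per float32 value), not figures from the API:

```python
def vectors_per_budget(dim, budget_bytes, bytes_per_value):
    """How many vectors of a given dimension fit in a memory budget."""
    return budget_bytes // (dim * bytes_per_value)

budget = 5 * 1024 * 1024  # a 5 MB budget
quantized_count = vectors_per_budget(256, budget, 1)  # int8 values
typical_count = vectors_per_budget(256, budget, 4)    # float32 values
# Quantized vectors fit 4x as many into the same budget.
```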

    The following table specifies the number of chunks returned for a single request based on vector dimension and the setting of vector quantization.

    Vector Dimension Size    Maximum Chunks (Quantized Vector = true)    Maximum Chunks (Quantized Vector = false)
    32                       40000                                       11000
    64                       22500                                       5800
    128                      12000                                       3000
    256                      6500                                        1500
    384                      4500                                        1000
    512                      3250                                        750
    768                      2250                                        500
    1024                     1700                                        380
    1536                     250                                         250
    2048                     850                                         190

    useCaseConfig

    The "useCaseConfig": "dataType": "string" parameter is common to all of the Async Chunking API chunkers in the /ai/async-chunking/CHUNKER/MODEL_ID request. If you do not enter the value, the default of query is used.

    This optional parameter enables model-specific handling in the Async Chunking API to help improve model accuracy. Choose the available dataType value that best aligns with the text sent to the Async Chunking API.

    The string values to use are:

    • "dataType": "query" for the query.

    • "dataType": "passage" for fields searched at query time.

    The syntax example is:

    "useCaseConfig": {
      "dataType": "query"
    }

    Unique parameters and fields

    chunkerConfig

    The parameters to configure each chunker are as follows:

    dynamic-newline chunker

    The dynamic-newline chunker splits the provided text on all newline characters, then merges consecutive split chunks that fall under the maxChunkSize limit. If no chunkerConfig is passed, the defaults shown below are used.

    • "chunkerConfig": "maxChunkSize" - This integer field defines the maximum token limit for a chunker. The default is 512 tokens, which matches the maximum context size of the Lucidworks-hosted embedding models.

      Example:

      "chunkerConfig": {
        "maxChunkSize": 512
      }
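
    The split-then-merge behavior can be sketched as follows. This is an approximation: whitespace-separated words stand in for the model tokenizer's real tokens.

```python
def dynamic_newline_chunk(text, max_chunk_size=512):
    """Split on newlines, then greedily merge consecutive pieces while the
    merged chunk stays within the (approximate) token limit."""
    pieces = [p for p in text.split("\n") if p.strip()]
    chunks, current = [], ""
    for piece in pieces:
        candidate = (current + "\n" + piece) if current else piece
        if len(candidate.split()) <= max_chunk_size:
            current = candidate  # still under the limit; keep merging
        else:
            chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks
```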

    dynamic-sentence chunker

    The dynamic-sentence chunker splits the provided text into sentences. Sentences are joined until they reach the maxChunkSize. If overlapSize is provided, adjacent chunks will both have overlapping sentences on the sides.

    Example:

    Chunk 1: S1 S2 S3
    Chunk 2: S3 S4 S5
    Chunk 3: S5 S6 S7

    If no chunkerConfig is passed, the defaults shown below are used.

    • "chunkerConfig": "maxChunkSize" - This integer field defines the maximum token limit for a chunker. The default is 512 tokens, which matches the maximum context size of the Lucidworks-hosted embedding models.

      Example:

      "chunkerConfig": {
        "maxChunkSize": 512
      }

    • "chunkerConfig": "overlapSize" - This integer field sets the number of sentences that can overlap between consecutive chunks. The default is 1 sentence for most configurations.

      Example:

      "chunkerConfig": {
        "overlapSize": 1
      }
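
    The fill-and-overlap behavior can be sketched as follows, again approximating tokens with whitespace-separated words:

```python
def dynamic_sentence_chunk(sentences, max_chunk_size=512, overlap_size=1):
    """Greedily fill chunks with whole sentences up to an approximate token
    limit, carrying overlap_size trailing sentences into the next chunk."""
    chunks, current = [], []
    for sentence in sentences:
        tokens = sum(len(s.split()) for s in current) + len(sentence.split())
        if current and tokens > max_chunk_size:
            chunks.append(" ".join(current))
            current = current[-overlap_size:] if overlap_size else []
        current.append(sentence)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

    With seven one-word sentences, a three-token limit, and an overlap of 1, this reproduces the S1 through S7 pattern shown above.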

    regex-splitter chunker

    The regex-splitter chunker splits the submitted text based on the specified regex (regular expression), following the conventions of the Python re package. If no chunkerConfig is passed, the defaults shown below are used. For more information about re operations, see https://docs.python.org/3/library/re.html.

    • "chunkerConfig": "regex" - This field sets the regular expression used to split the provided text. For example, \\n.

      Example:

      "chunkerConfig": {
        "regex": "\\n"
      }
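
    The behavior can be illustrated with Python's own re module, whose conventions the chunker follows. Dropping empty pieces is an assumption made for this sketch:

```python
import re

def regex_split_chunk(text, pattern=r"\n"):
    """Split text with re.split, dropping empty pieces."""
    return [piece for piece in re.split(pattern, text) if piece.strip()]
```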

    semantic chunker

    The semantic chunker creates chunks based on semantic similarity.

    Using the model defined in the URL request, the semantic chunker splits text into sentences, encodes the sentences, and then compares the sentence to the building chunk to determine if they are similar enough to group together.

    After merging two semantically-similar sentences into a pre-chunk, the semantic chunker needs to encode it to get its vector to compare with the next sentence vector.

    This chunker is the slowest of all of the chunkers even if you set the approximate field to true.

    If no chunkerConfig is passed, the defaults shown below are used.

    • "chunkerConfig": "maxChunkSize" - This integer field defines the maximum token limit for a chunker. The default is 512 tokens, which matches the maximum context size of the Lucidworks-hosted embedding models.

      Example:

      "chunkerConfig": {
        "maxChunkSize": 512
      }

    • "chunkerConfig": "overlapSize" - This integer field sets the number of sentences that can overlap between consecutive chunks. The default is 1 sentence for most configurations.

      Example:

      "chunkerConfig": {
        "overlapSize": 1
      }

    • "chunkerConfig": "cosineThreshold" - This decimal field controls how similar a sentence must be to a chunk (based on cosine similarity), in order for the sentence to be merged into the chunk. This value is a decimal between 0 and 1. The default threshold is 0.5.

      Example:

      "chunkerConfig": {
        "cosineThreshold": 0.5
      }

    • "chunkerConfig": "approximate" - If this boolean field is set to true, the semantic chunker does not re-encode the merged text to get its vector to compare with the next sentence vector. This greatly reduces processing time with little to no loss in result quality. However, even with approximate set to true, the semantic chunker is the slowest of all the chunkers.

      If this field is set to false, the semantic chunking is, on average, 5 times slower than when set to true, with very minimal or no precision increase.

      Example:

      "chunkerConfig": {
        "approximate": true
      }
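
    A minimal sketch of the threshold logic, assuming the approximate variant (each sentence vector is compared against the previous sentence's vector rather than a re-encoded chunk) and caller-supplied vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_chunk(sentences, vectors, cosine_threshold=0.5):
    """Merge consecutive sentences whose vectors meet the similarity threshold."""
    chunks, current = [], [0]
    for i in range(1, len(sentences)):
        if cosine_similarity(vectors[i - 1], vectors[i]) >= cosine_threshold:
            current.append(i)  # similar enough; extend the building chunk
        else:
            chunks.append(" ".join(sentences[j] for j in current))
            current = [i]
    chunks.append(" ".join(sentences[j] for j in current))
    return chunks
```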

    sentence chunker

    The sentence chunker splits text on sentences. If no chunkerConfig is passed, the defaults shown below are used.

    • "chunkerConfig": "chunkSize" - This integer field sets the maximum number of sentences per chunk. The default is 5.

      Example:

      "chunkerConfig": {
        "chunkSize": 5
      }

    • "chunkerConfig": "overlapSize" - This integer field sets the number of sentences that can overlap between consecutive chunks. The default is 1 sentence for most configurations.

      Example:

      "chunkerConfig": {
        "overlapSize": 1
      }
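
    Because chunkSize here counts sentences rather than tokens, this chunker behaves like a simple sliding window. A sketch:

```python
def sentence_chunk(sentences, chunk_size=5, overlap_size=1):
    """Group sentences into fixed-size chunks, sharing overlap_size sentences
    between consecutive chunks."""
    step = max(chunk_size - overlap_size, 1)
    chunks = []
    for start in range(0, len(sentences), step):
        chunks.append(" ".join(sentences[start:start + chunk_size]))
        if start + chunk_size >= len(sentences):
            break
    return chunks
```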

    POST request

    The following is an example of the POST request used by every chunker. Fields and values unique to each chunker are detailed in Unique parameters and fields.

    curl --request POST \
      --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-chunking/{CHUNKER}/{MODEL_ID} \
      --header 'Authorization: Bearer ACCESS_TOKEN' \
      --header 'Content-Type: application/json' \
      --data '{
        "batch": [
          {
            "text": "The itsy bitsy spider climbed up the waterspout.\nDown came the rain.\nAnd washed the spider out.\nOut came the sun.\nAnd dried up all the rain.\nAnd the itsy bitsy spider climbed up the spout again."
          }
        ],
        "useCaseConfig": {
          "dataType": "query"
        },
        "modelConfig": {
          "vectorQuantizationMethod": "max-scale"
        }
      }'

    Response statuses

    The following are the values of the status of a request:

    • SUBMITTED. The POST request was successful, and the response returned the chunkingId and status used by the GET request.

    • ERROR. An error occurred while processing the request.

    • READY. The results associated with the chunkingId are available and ready to be retrieved.

    • RETRIEVED. The results associated with the chunkingId were returned successfully by a previous GET request.

    Successful response for a POST request

    The following is an example of a successful response:

    {
        "chunkingId": "708df452-4ac9-4c2c-a8f1-8fcd7a9687f3",
        "status": "SUBMITTED"
    }

    Error response for a POST request

    The following is an example of an error response:

    {
        "chunkingId": "899df453-4ac9-4c2c-c8f1-8fcd7a9687f3",
        "status": "ERROR",
        "message": "Could not chunk the submitted text."
    }

    GET request

    To retrieve the chunked results, use the chunkingId from the POST response in a GET request.

    The following is an example of the GET request used by every chunker.

    curl --request GET \
      --url https://APPLICATION_ID.applications.lucidworks.com/ai/async-chunking/{CHUNKING_ID} \
      --header 'Authorization: Bearer ACCESS_TOKEN' \
      --header 'Content-type: application/json'

    Response when results not ready

    The following is an example of the response when the results are not yet ready to be retrieved. The status is still "SUBMITTED".

    {
        "chunkingId": "708df452-4ac9-4c2c-a8f1-8fcd7a9687f3",
        "status": "SUBMITTED"
    }

    Response when results are ready

    The following is an example of a successful response when the results are ready to be retrieved. The status is "READY".

    {
        "chunkingId": "708df452-4ac9-4c2c-a8f1-8fcd7a9687f3",
        "status": "READY",
        "chunkedData": [
            {
                "chunks": [
                    "The itsy bitsy spider climbed up the waterspout.",
                    "Down came the rain.",
                    "And washed the spider out.",
                    "Out came the sun.",
                    "And dried up all the rain.",
                    "And the itsy bitsy spider climbed up the spout again."
                ],
                "vectors": [
                    {
                        "tokensUsed": {
                            "inputTokens": 15
                        },
                        "vector": [
                            -21, -16, 21, -7, 0, -1, 24, 15, -14, -5, 2, -38, 4, 14, 4, 17, -2, 17, -26, 12, 26, -14, 10, -8, 7, 0, -7, 14, -14, -78, -7, -6, 13, -8, 14, 5, -8, 3, 18, 14, 3, 6, -11, -16, -21, -21, -6, -10, 25, -19, 5, 1, -7, 14, -8, 1, 32, -2, 4, 6, 31, 4, -73, 44, 32, 18, -24, -9, 16, 1, -4, 23, 15, 15, 19, -5, 5, 0, 0, 6, -20, -24, -18, 2, 2, -11, 15, -33, 20, 12, -1, -6, -10, -8, -2, 0, 1, 0, -15, 85, -25, 19, 21, 1, -6, -14, 0, -17, 0, 0, 11, -7, 28, -5, 2, 1, 0, 0, -30, 10, 2, -7, 23, -6, 8, -15, 34, 50, 20, -1, 20, -27, -1, -21, -3, 8, 4, 10, 9, -14, -22, -28, 2, -41, -2, 26, -17, 12, -11, 5, -2, 9, -9, -7, 8, 11, -9, 39, -11, -7, -4, -7, -11, 1, 33, -39, -23, 4, 7, -7, 6, 18, -32, 15, 13, -8, -15, -15, 2, 11, 8, -23, -22, 14, 12, -6, 0, -2, 17, 21, 8, 4, -1, -21, -11, -10, -8, 8, -6, -11, 33, -14, 0, 8, 10, 4, 18, 0, 11, 18, -3, 3, 20, -12, -20, -10, 3, -3, 4, 17, 17, -30, -21, -73, -2, -5, -14, 13, -15, 22, -15, 25, 2, 24, -15, 1, 29, -11, 23, 14, 1, 13, 18, -24, 12, -12, -19, -9, -13, 74, 50, -3, -15, 3, 13, -12, -47, 10, 12, 26, -20, -18, -24, -10, 11, 8, -35, -7, -4, 0, 8, -6, 3, 15, -3, 23, 9, 14, -13, -28, 17, -15, 13, 8, -19, 4, -21, 16, 8, -2, -14, 12, -28, -14, 22, 1, -1, 15, 13, 23, -13, 10, -22, 19, -30, 11, -1, -10, -12, -1, -14, 0, -19, 4, 14, -23, -5, 5, 3, -127, 1, 7, 2, -5, 7, 20, 20, -18, 12, -1, -2, -2, 0, 9, 23, -1, -11, 23, -11, 8, 11, 87, -5, 0, 19, -4, 15, 3, -7, 13, 10, 13, -27, -4, 15, -15, 27, 6, -14, -19, 5, 0, -8, 47, -17, -11, -14, 13, 33, -20, -16, 1, -4, 16, 5, -16, -18, -21, -20, 2, -34, -16, 5, 27
                        ]
                    },
                    {
                        "tokensUsed": {
                            "inputTokens": 7
                        },
                        "vector": [
                            -12, 0, 21, 5, -8, 12, 21, 11, 0, 0, -5, -36, 31, 7, -8, -4, 3, 15, -40, 15, -5, -1, -12, -13, -6, 19, -4, -6, -20, -62, -10, -15, 8, 4, 2, -15, -19, 13, 0, 4, 12, 3, -15, -28, -18, -23, 3, -2, 33, -21, 17, 8, -4, 11, 3, 14, 25, 15, 6, 3, -6, 10, -107, 35, 31, 11, -31, -9, 26, 26, -17, 16, 22, 10, -5, -16, -17, -13, -3, -13, 16, -2, -6, 3, -20, -12, 15, -19, 26, 0, -8, -11, -7, 17, -32, 1, 21, 4, -5, 127, -42, 11, 16, -1, -4, -16, -4, -31, 7, 2, 4, -11, 32, -24, 1, -4, 6, 3, -14, -1, 6, 8, 22, -2, 14, -40, 34, 48, 6, -9, 26, -22, -9, -28, 9, -5, -5, 6, -6, -30, -11, -48, -18, -49, 7, 22, -26, 20, -5, 21, -6, 22, -16, -7, 23, 21, 9, 38, -16, 0, 11, -27, 6, 7, -8, -31, -7, 10, 5, 1, 5, 0, -6, -2, 15, -5, -18, 7, 1, -2, 16, -19, -13, 30, 5, 0, -13, -3, 21, 21, 0, 0, 5, -9, -22, 0, -16, 0, 0, -15, 6, 0, -11, 3, 0, 28, 14, -16, 6, 6, -2, -4, 19, -6, 0, 8, 0, 22, -6, 9, 20, -56, -21, -72, 0, 5, -35, 19, -16, 0, -13, 26, 10, 52, -22, -7, 0, 1, 15, 17, 3, 5, 9, -9, 25, -25, -28, 17, -4, 103, 42, 15, -22, 5, 9, -4, -43, 23, 8, 17, -9, -26, -24, -9, 18, 14, -15, -6, -2, -14, -3, -11, -1, 18, -15, 17, -2, 11, -14, -38, 21, -10, 1, 24, -16, 19, -24, 8, 2, -9, -26, 5, -16, -6, 13, -20, -12, 23, 10, 14, -17, -13, -12, 22, -38, 21, 8, 2, 10, 12, -3, 6, -26, 9, 7, -20, -19, 8, -8, -121, 19, 8, 3, -9, 20, 26, 43, -19, 17, 5, 2, 2, -4, 19, 15, -1, -5, 13, -3, -1, 26, 85, -21, 15, 28, -4, 10, 5, -2, 30, 19, 36, -29, 4, 13, -12, 38, 10, -10, -27, 4, 2, -29, 37, -6, -3, -18, 18, 27, -12, 8, -2, 0, 5, 18, -34, -4, -19, -15, 9, -46, -8, -12, 24
                        ]
                    },
                    {
                        "tokensUsed": {
                            "inputTokens": 8
                        },
                        "vector": [
                            -16, 3, 19, -7, 12, -12, 26, 19, -1, -4, 0, -30, 13, 26, 4, 4, -10, 25, -32, 22, 18, -9, 6, 0, 16, 16, -12, -2, -9, -82, -4, -11, 6, -11, 16, -23, -15, 3, -10, 21, 20, 12, -10, -12, -21, -6, -3, -28, 33, -30, 22, -7, 1, 7, 0, 4, 46, -2, 17, 32, 14, 10, -84, 39, 33, 16, -23, -19, 21, 24, -17, 15, 19, 11, 5, 2, 14, -17, -4, 9, -20, -25, -11, 3, -3, -14, 0, -18, 23, 6, 6, 2, 11, -11, -4, -4, 2, 11, -7, 101, -26, -1, 19, -16, -2, -14, 4, -10, -15, 14, 11, -7, 20, -18, 32, 7, 6, 8, -19, 8, 14, 3, 12, -1, 19, -17, 34, 67, 15, 3, 15, -34, -6, -11, 3, 12, 5, 15, 9, -17, -26, -41, -19, -36, 7, 35, -21, 12, -19, -1, 3, 21, -19, -3, 4, 29, 4, 46, -14, 3, 6, -6, -10, 3, 18, -16, -21, -4, 7, 8, 9, 6, -27, -4, 17, 2, 11, -3, 4, 4, 14, -15, -40, 22, 13, -9, -3, -9, 16, 25, 11, 0, -4, -8, -14, -10, -20, -2, -14, -14, 31, -11, -16, 17, 9, 24, -1, -10, 17, 22, -15, 13, 15, -7, -10, -4, 7, -2, 14, 15, 12, -33, -27, -79, -1, 0, -21, 20, -28, 3, -31, 13, -6, 15, -10, 10, 27, -14, 21, 11, 13, -2, 0, -8, 6, 5, -21, 18, -23, 93, 57, 7, -32, 12, 11, -14, -53, 8, 17, 16, -19, -13, -13, -14, 6, -4, -43, 7, -17, -16, -12, -25, 12, 11, -1, 16, 0, 19, -18, -33, 22, -18, 13, 6, -9, 18, -32, 17, 6, 5, -24, 15, -26, -17, 29, 8, -23, 13, 9, 17, -13, -12, -24, 27, -31, 13, -2, -12, -7, -8, -14, 0, -6, 11, 15, -27, -8, 1, -7, -127, 2, 16, 19, -9, 16, 11, 22, -9, 16, 19, -6, 0, -4, 4, 17, -3, -26, 16, -14, 10, -1, 88, -20, 8, 11, -1, 7, 23, -10, 30, -2, 18, -28, -1, -6, -25, 22, 2, 0, -30, -7, 11, -19, 40, -17, -30, -21, 17, 13, -16, 2, -1, -4, 6, -1, -22, -5, -12, -24, 9, -24, -26, 9, 24
                        ]
                    },
                    {
                        "tokensUsed": {
                            "inputTokens": 7
                        },
                        "vector": [
                            -13, 0, 7, 0, 10, 17, 30, -3, 4, 2, 9, -15, 17, 28, 7, -8, -6, 11, -32, 8, 1, -10, -12, -20, 20, 34, -18, -19, -28, -56, 0, -18, 14, -18, 8, -18, -14, -9, -4, 14, 14, -15, -6, -28, -29, 0, -3, -18, 30, -11, 12, -16, -3, 17, 7, 10, 21, 3, 19, 26, 9, 22, -94, 17, 20, 15, -14, -15, 14, 13, -13, 18, 6, 6, 8, 1, -9, -30, -6, -1, 2, -8, -9, 5, 2, -7, -1, -15, 20, 18, -21, 5, -12, 4, -23, -3, 16, -2, 13, 105, -16, 7, 25, -13, 0, -17, -4, -21, -8, 2, 6, 9, 24, -14, 10, -10, 10, 17, -16, 4, -6, -4, 24, 6, 3, -27, 44, 44, 14, -6, 19, -15, -8, -9, -4, -2, -1, 1, 8, -24, 0, -45, -5, -38, -1, 14, -15, 17, -12, 30, -5, 23, -16, 0, 15, 17, -5, 41, -4, 20, 14, -23, 0, 13, -3, -45, -6, 13, 11, 1, 17, 21, -18, 2, 45, 6, -7, 1, -4, 3, 25, -10, -30, 20, 15, -16, -13, 2, 10, 22, -5, 15, -18, -12, -22, -19, -19, -8, -8, -17, 14, -6, -10, 12, 10, 20, 8, -8, 5, 21, -11, 5, 24, -4, -26, 8, 13, 4, 5, 15, 11, -52, -33, -90, 11, 6, -18, 20, -23, 12, -6, 33, 12, 36, -17, 0, 1, -12, 6, 7, 13, -3, -12, -16, 7, -30, -22, 26, -2, 92, 41, 12, -29, -10, 0, -5, -54, 29, 9, 20, 1, -9, -12, 9, 6, -20, -31, 5, 0, -7, 3, -2, 3, 21, -26, 13, -7, 9, -18, -45, 22, -9, 10, -8, -4, 15, -9, 21, 4, 1, -26, 11, -30, -17, 21, -20, -24, 18, 19, 20, -17, -20, -26, 22, -35, 10, 26, -5, -1, 18, -9, -2, -27, 7, 17, 9, -18, 12, -6, -126, 17, 3, 1, 0, 25, 26, 35, -17, 8, 1, -2, -7, 9, 3, 14, 2, -6, 0, 2, 17, 10, 84, -10, 14, 18, 0, 15, 21, 0, 31, 6, 18, -10, 11, 11, -6, 26, 15, -17, -25, -10, 5, -13, 38, -23, -13, -38, 9, 1, -22, -14, -17, 23, -15, 0, -22, -17, -3, -7, 9, -31, 3, -5, 16
                        ]
                    },
                    {
                        "tokensUsed": {
                            "inputTokens": 9
                        },
                        "vector": [
                            -19, 23, 20, 9, 6, 3, 18, 7, -3, 4, -15, -34, 27, 8, 15, 3, 1, 28, -61, 5, 11, -2, -1, -3, 12, 21, -19, -6, -15, -98, -9, -15, 19, 11, 3, -11, -2, 18, 0, 3, 22, 9, -20, -23, -14, -25, -6, 0, 50, -15, 27, -4, -2, -2, 7, 6, 14, 14, 6, -2, -7, 25, -113, 37, 40, 15, -37, -14, 26, 28, -20, 18, 31, 29, -1, -10, -8, 0, -16, 2, 13, -21, -11, 16, -11, -8, 5, -15, 19, -1, 12, -14, 0, 16, -30, -1, 11, 13, -5, 124, -36, 3, 9, -8, 5, -17, -11, -38, -10, -9, -10, -13, 24, -15, 8, -27, 10, 6, -45, 0, 5, 8, 22, 12, 0, -35, 29, 61, 16, 1, 12, -28, 0, -5, 2, 0, -1, 9, 6, -24, -30, -46, -17, -51, 14, 41, -10, 1, -13, 19, 2, 20, -5, -6, 12, 22, 10, 34, -11, 0, 6, -10, -12, 19, -8, -3, -14, 6, -10, 0, 9, 4, -1, -13, 21, 6, -14, 0, 3, -3, 26, 2, -19, 34, 20, -5, -30, -8, 14, 19, 7, -6, -3, -28, -37, 8, -28, 2, 2, -20, 14, -15, -12, 0, 13, 25, -1, -18, 24, 13, -8, 4, 19, 2, -4, 5, -14, 14, -9, 9, 35, -62, -19, -81, 0, 7, -28, 36, -33, 4, -8, 19, 6, 33, -17, 16, 13, -2, 15, 22, 24, 14, 0, 1, 30, -16, -20, 25, -24, 122, 44, 8, -33, 17, 3, -8, -65, 16, 24, 14, -4, -18, -20, -22, 28, 10, -26, -5, -3, -6, 0, -23, 4, 16, -12, 16, 0, 10, -18, -35, 19, -12, 3, 25, -11, 8, -27, 8, 0, -1, -34, 13, -32, -12, 30, -28, -31, 13, 0, 8, -24, -5, -17, 36, -48, 13, 11, 0, 5, 5, -3, -2, -8, 8, 8, -50, -3, 31, -2, -126, 13, 8, -1, -23, 22, 14, 51, -16, 10, 21, 9, -11, -3, 27, 17, 0, 0, 23, -17, -1, 12, 73, -16, 2, 46, -2, -1, 19, 0, 22, 23, 23, -19, 4, 16, -8, 41, 0, -6, -21, 0, -17, -16, 36, -12, -1, -38, 12, 21, -6, 6, -4, -15, 0, 8, -42, -10, -33, -10, 7, -30, -26, 1, 15
                        ]
                    },
                    {
                        "tokensUsed": {
                            "inputTokens": 16
                        },
                        "vector": [
                            -25, -16, 24, -9, 4, -3, 24, 18, -5, -5, 3, -28, 9, 16, 4, 12, -1, 14, -23, 6, 17, -9, 10, -9, 10, 2, -11, 9, -11, -90, -1, -13, 3, -13, 15, -6, -17, 8, 10, 12, 11, 3, -10, -22, -22, -13, -10, -16, 24, -21, 15, -9, 1, 14, -7, -1, 34, -2, 9, 6, 28, -1, -69, 51, 34, 3, -36, -16, 7, 16, 0, 20, 8, 7, 13, 0, 5, -9, 2, 11, -21, -15, -15, 3, -11, -20, 7, -23, 24, 19, 2, 5, 1, -7, -4, -8, 0, 6, -7, 79, -26, 12, 22, 1, -2, -13, -1, -8, 0, 9, 11, -5, 37, -1, 3, 2, 7, 0, -25, 12, 15, 3, 20, -18, 14, -17, 28, 59, 19, 3, 16, -23, -1, -23, 4, 12, 4, 5, 11, -19, -18, -27, -8, -41, 3, 34, -20, 24, -13, 3, -10, 9, 0, -1, 10, 12, -6, 38, -10, 4, 4, -5, -18, -10, 28, -35, -18, -2, 7, 2, 15, 18, -34, 6, 15, -1, -4, -20, 4, 18, 15, -16, -23, 20, 2, -15, -6, -10, 15, 30, 0, 15, -3, -16, -10, -13, -5, 6, -8, -8, 25, -13, -6, 7, 14, 10, 6, -3, 16, 11, -3, 6, 23, -14, -12, -11, 1, 0, 5, 12, 20, -16, -18, -75, 3, -2, -20, 20, -13, 17, -18, 17, 5, 17, -10, 2, 27, -9, 15, 15, 6, 3, 17, -13, 11, -10, -24, -7, -18, 72, 43, 9, -27, 0, 13, -5, -48, 16, 13, 20, -22, -14, -24, -16, 9, 6, -36, 0, -11, -2, 10, -6, 11, 7, 2, 12, 0, 23, -11, -12, 16, -8, 15, -3, -13, 7, -15, 21, 17, 6, -24, 12, -22, -14, 26, 5, -18, 14, 12, 18, -13, -4, -21, 28, -31, 23, 8, -2, -7, 2, -13, -2, -22, 3, 14, -29, -12, -4, -6, -126, 3, 6, 0, -12, 16, 18, 30, -11, 1, 4, 4, 10, 5, 0, 15, -4, -11, 14, -17, 8, 9, 92, -5, -4, 5, -5, 10, 3, -3, 15, -7, 0, -20, 2, 8, -18, 17, 12, -10, -25, 1, -22, -13, 53, -25, -17, -23, 27, 24, -17, -23, 1, 0, 11, 8, -24, -11, -23, -17, -1, -27, -14, 14, 31
                        ]
                    }
                ],
                "stats": {
                    "totalChunks": 6,
                    "totalInferences": 6
                }
            }
        ]
    }

    Error Response for GET request

    The following is an example of an error response:

    {
        "chunkingId": "899df453-4ac9-4c2c-c8f1-8fcd7a9687f3",
        "status": "ERROR",
        "message": "Unable to return results."
    }