The Call Pipeline index stage calls another index pipeline.
You can use this stage to reuse pipeline logic across multiple pipelines.
You can also use it to index certain data separately from the rest, update a data model, or distribute indexing across multiple collections or pods.

In the context of an index pipeline, the Call Pipeline stage creates a “fork” that runs another pipeline in parallel with the main pipeline.
The called pipeline does not return any data to the pipeline that called it, so it should end with a stage that writes the output to a collection, a data model, or some other endpoint.
Note that this is different from Call Pipeline stages in query pipelines, where the called pipeline does return its output to the pipeline that called it.

For example, the main pipeline might call a pipeline that indexes some metadata separately from the main document collection. As another example, multiple index pipelines can end with a Call Pipeline stage, and the called pipeline can perform some final processing before indexing the documents; this is a way to reuse pipeline logic so that data from different datasources can be indexed in the same consistent format.
Use a naming convention that lets you easily differentiate your main pipelines from the pipelines you use as call pipelines.
For example, add a suffix such as _cpl to your call pipelines.
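For illustration, the sketch below creates a shared _cpl pipeline and a main pipeline that hands off to it, using the Fusion REST API. The host, port, credentials, stage type IDs (call-pipeline, field-mapping, solr-index), and property names are assumptions rather than confirmed values; copy the exact stage JSON from a pipeline you configure in the UI before scripting it this way.

```python
import requests

FUSION_API = "https://FUSION_HOST:6764/api"   # adjust host and port for your deployment
AUTH = ("admin", "password123")               # replace with your own credentials

# Shared pipeline that does the final formatting and writes to the collection.
# Because a called index pipeline returns nothing to its caller, it ends with
# an indexing stage.  Stage type IDs and property names here are illustrative.
shared_pipeline = {
    "id": "format_docs_cpl",                  # "_cpl" suffix marks it as a call pipeline
    "stages": [
        {"type": "field-mapping", "mappings": []},
        {"type": "solr-index", "enforceSchema": True},
    ],
}

# Main pipeline whose last stage forks the documents into the shared pipeline.
main_pipeline = {
    "id": "webcrawl_main",
    "stages": [
        {"type": "call-pipeline", "pipeline": "format_docs_cpl"},
    ],
}

for pipeline in (shared_pipeline, main_pipeline):
    resp = requests.post(f"{FUSION_API}/index-pipelines", json=pipeline, auth=AUTH)
    resp.raise_for_status()
```

With this layout, any other datasource pipeline can end with the same call-pipeline stage, so formatting and indexing logic lives in one place.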
When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
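As a quick illustration (the delimiter field name is hypothetical), the snippet below shows that the escaped form sent in a JSON API request decodes to the same two characters you would type into the UI field:

```python
import json

# In the UI, you type the two characters \t directly into the field.
# The API accepts JSON, where a backslash must itself be escaped, so the same
# value is written as \\t in the request body.
raw_body = r'{"delimiter": "\\t"}'   # hypothetical field, raw JSON as sent to the API
config = json.loads(raw_body)
print(config["delimiter"])           # prints \t -- backslash + t, same as the UI value
```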