
Batch Renders API

This section covers everything you need to know about API-based batch rendering in Plainly. Batches are helpful when you want to create multiple renders for a single project.

Overview

Batch rendering lets you create many renders with a single request. It conserves rate limits, reduces the number of network calls, and keeps related renders grouped together.

A batch requires a project and one or more templates. When multiple templates are defined, each batch data entry is rendered for each template. This means if you supply two templates and 10 batch data entries, the API will create 20 individual renders. If no template is specified, the project's default template is used.
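Since the total render count is the product of templates and entries, it can help to compute the expected total before submitting a large batch. A minimal sketch (the function name is illustrative, not part of the API):

```typescript
// One render is created per (template, entry) pair. When no templates are
// given, the project's default template is used, i.e. one template.
function expectedRenderCount(templateIds: string[], entryCount: number): number {
  const templates = templateIds.length > 0 ? templateIds.length : 1;
  return templates * entryCount;
}
```

For example, two templates and 10 entries yield 20 renders, matching the description above.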

Every entry in the batch still results in an individual render object, so webhooks, integrations, and any follow-up operations such as cancel or delete continue to apply per render.

Each render in the batch is automatically assigned the following attributes:

  • batchRenderId - A unique identifier of the batch.
  • batchRenderSequence - The sequence number of the render within the batch.
  • batchRenderGeneratedId - An internal attribute that is always set to true for renders created through a batch.

Creating a batch

Send a POST request to /api/v2/batch with shared render options and an array of batchData entries. Each entry defines the following properties and merges the batch-level settings with its own parameters and optional overrides.

  • parameters (Record<string, any>) - The parameters to use for the render.
  • attributes (Record<string, any>) - Custom attributes defined by the user.
  • integrationsPassthrough (string) - Optional integration passthrough data to include when calling integrations for this entry. If not specified, the shared integration settings will be used.
  • webhookPassthrough (string) - Optional webhook passthrough data to include in the webhook payload for this entry. The webhook URL must be defined in shared settings for this to have effect. If not specified, the shared webhook settings will be used.
  • attachmentFileName (string) - Optional file name, without extension, for video(s) created by this entry. If not specified, the shared output format settings will be used.
  • uploads ({ signedUrl?: { output: { url: string; headers?: Record<string, string>; }; }; }) - Optional upload options for this entry. When provided, overrides the batch-level upload options defined in options.uploads. This allows specifying a different signed URL destination for each render in the batch.
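For reference, a single entry could be typed roughly as follows in TypeScript. This interface is an unofficial sketch derived from the fields above, not an official SDK type:

```typescript
// Illustrative (unofficial) shape of one batchData entry.
interface BatchDataEntry {
  parameters?: Record<string, unknown>;
  attributes?: Record<string, unknown>;
  integrationsPassthrough?: string;
  webhookPassthrough?: string;
  attachmentFileName?: string;
  uploads?: {
    signedUrl?: {
      output: { url: string; headers?: Record<string, string> };
    };
  };
}
```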

Batch creation is synchronous

The endpoint responds only after all renders are successfully created, so the call may take a noticeable amount of time for larger batches. Be sure to increase your HTTP client timeout settings accordingly.
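If you use Node's built-in fetch, one way to allow for longer batch calls is an explicit deadline via AbortSignal.timeout. The value below is an illustrative assumption, not an official recommendation:

```typescript
// Node's built-in fetch has no default request timeout; for large batches an
// explicit, generous deadline can be set per request. 5 minutes is only an
// example value — tune it to your batch sizes.
const batchTimeoutMs = 5 * 60 * 1000;
const signal = AbortSignal.timeout(batchTimeoutMs);
// Usage (sketch): fetch("https://api.plainlyvideos.com/api/v2/batch", { method: "POST", signal /* , ... */ })
```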

All or nothing

If the API returns an error response, no renders from the batch are created.

batch-render.sh
curl -X POST \
  -H "Content-Type: application/json" \
  -u "$PLAINLY_API_KEY:" \
  -d '{
    "projectId": "media-abstract@v1",
    "templateIds": ["square"],
    "reference": "my-internal-ref",
    "webhook": { "url": "https://example.com/callback" },
    "batchData": [
      {
        "parameters": {
          "image": "https://picsum.photos/1920/1920",
          "newsCta": "Book a demo",
          "newsHeading": "Alice",
          "newsSubheading": "Create personalized videos on autopilot",
          "newsLogo": "https://storage.googleapis.com/plainly-static-data/plainly-logo-black.png"
        },
        "webhookPassthrough": "alice-1"
      },
      {
        "parameters": {
          "image": "https://picsum.photos/1920/1920",
          "newsCta": "Start trial",
          "newsHeading": "Bob",
          "newsSubheading": "Try Plainly free for 14 days",
          "newsLogo": "https://storage.googleapis.com/plainly-static-data/plainly-logo-black.png"
        },
        "webhookPassthrough": "bob-2"
      }
    ]
  }' \
  https://api.plainlyvideos.com/api/v2/batch

The returned JSON response will contain the ID of the new batch and all the other properties of the Batch Render object. You can always use this ID to execute a GET request to /api/v2/batch/{batchId} to obtain information about the batch, or a GET request to /api/v2/batch/{batchId}/renders to list all renders created within the batch.

Create batch example JSON response

{
  "id": "ba5b10be-b950-499b-b141-2b9a645c074b",
  "createdDate": "2025-08-29T11:54:24.915Z",
  "projectId": "media-abstract@v1",
  "reference": "my-internal-ref",
  "totalRenders": 2,
  "publicDesign": true
}

Check the Batch Renders API reference for the complete list of operations.

The API creates one render for each entry in batchData. You can track and manage those renders individually using the standard Render endpoints.

Per-entry options

Beyond parameters and attributes, each batch data entry can override several batch-level settings. When a value is provided on an entry it takes precedence over the corresponding shared setting for that particular render.

  • webhookPassthrough - Overrides the shared webhook passthrough. Custom data included in the webhook payload for this entry; the webhook URL must still be defined at the batch level.
  • integrationsPassthrough - Overrides the shared integration passthrough. Custom data forwarded to integrations for this entry.
  • attachmentFileName - Overrides the shared output format file name. File name (without extension) for the video produced by this entry.
  • uploads - Overrides options.uploads. Upload options for this entry, allowing each render to be uploaded to a different signed URL destination.
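The precedence rule can be sketched as a simple fallback: entry-level values win, and shared values fill the gaps. The function and type names here are illustrative, not part of the API:

```typescript
// Fields that an entry may override (see the list above).
interface EntryOverridableOptions {
  webhookPassthrough?: string;
  integrationsPassthrough?: string;
  attachmentFileName?: string;
}

// A value set on the entry takes precedence over the shared (batch-level)
// setting for that particular render; unset fields fall back to shared values.
function resolveEntryOptions(
  shared: EntryOverridableOptions,
  entry: EntryOverridableOptions,
): EntryOverridableOptions {
  return {
    webhookPassthrough: entry.webhookPassthrough ?? shared.webhookPassthrough,
    integrationsPassthrough: entry.integrationsPassthrough ?? shared.integrationsPassthrough,
    attachmentFileName: entry.attachmentFileName ?? shared.attachmentFileName,
  };
}
```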

The example below shows a batch where each entry specifies its own webhookPassthrough, attachmentFileName, and uploads:

batch-render-with-overrides.sh
curl -X POST \
  -H "Content-Type: application/json" \
  -u "$PLAINLY_API_KEY:" \
  -d '{
    "projectId": "media-abstract@v1",
    "templateIds": ["square"],
    "webhook": { "url": "https://example.com/callback" },
    "batchData": [
      {
        "parameters": { "newsHeading": "Alice" },
        "webhookPassthrough": "alice-1",
        "attachmentFileName": "alice-promo",
        "uploads": {
          "signedUrl": {
            "output": {
              "url": "https://storage.example.com/alice-video?signature=abc",
              "headers": { "x-ms-blob-type": "BlockBlob" }
            }
          }
        }
      },
      {
        "parameters": { "newsHeading": "Bob" },
        "webhookPassthrough": "bob-2",
        "attachmentFileName": "bob-promo",
        "uploads": {
          "signedUrl": {
            "output": {
              "url": "https://storage.example.com/bob-video?signature=def"
            }
          }
        }
      }
    ]
  }' \
  https://api.plainlyvideos.com/api/v2/batch

Open batches

Open batches decouple batch creation from batch data submission. Instead of sending all entries in the request body, you create the batch with its shared options, upload a JSONL file with the entries to a signed URL, and Plainly ingests that file in chunks over time. This lifts the size ceiling of a single HTTP request and moves pacing from your code into the platform.

Open batches produce the same individual renders as synchronous batches. Each entry still becomes a standalone render with the batch related attributes, and webhooks, integrations, and per-render operations work exactly the same way.

When to use open batches

Open batches are a good fit when:

  • You need to submit more entries than fit in a single HTTP request, for example tens or hundreds of thousands of renders generated from a CSV, data warehouse export, or upstream pipeline.
  • Your batch size exceeds the render queue or throttled-job limits and you want Plainly to pace submissions for you instead of scheduling retries on your side.
  • You want to trigger a batch from one system and feed data from another. The upload is a plain HTTPS PUT to a signed URL, so any component that can make an HTTPS request can deliver the file.
  • You do not require the all-or-nothing semantics of a synchronous batch — open batches commit entries incrementally as they are ingested.

If you are submitting a small, fixed set of entries and want a single synchronous response, stick with the regular batch endpoint described above.

How it works

The lifecycle of an open batch is:

  1. Your client calls POST /api/v2/batch/open with shared render options. No batchData is sent.
  2. Plainly responds with a Batch Render object that includes an openBatch block containing a signed upload URL and its expiration timestamp.
  3. Your client uploads a batch data file — a stream of JSON objects — to that URL before it expires. Each object is one batch data entry.
  4. Plainly reads the file in chunks on a regular tick, converts each entry into a render, and submits them through the standard render pipeline.
  5. Track ingestion progress via the openBatch block on the Batch Render object. See Ingestion progress for the full shape of the openBatch block.

Creating an open batch

Send a POST request to /api/v2/batch/open with the same shared options as a synchronous batch, but without the batchData field.

open-batch-create.sh
curl -X POST \
  -H "Content-Type: application/json" \
  -u "$PLAINLY_API_KEY:" \
  -d '{
    "projectId": "media-abstract@v1",
    "templateIds": ["square"],
    "reference": "customers.csv",
    "webhook": { "url": "https://example.com/callback" }
  }' \
  https://api.plainlyvideos.com/api/v2/batch/open

The response contains the newly created batch together with an openBatch block:

Create open batch example JSON response

{
  "id": "ba5b10be-b950-499b-b141-2b9a645c074b",
  "createdDate": "2026-04-17T10:00:00.000Z",
  "projectId": "media-abstract@v1",
  "reference": "customers.csv",
  "templateIds": ["square"],
  "totalRenders": 0,
  "publicDesign": true,
  "openBatch": {
    "uploadUrl": "https://storage.googleapis.com/plainly-batch-uploads/...signature...",
    "uploadUrlExpiryDate": "2026-04-17T11:00:00.000Z",
    "finished": false,
    "errorCode": null,
    "errorMessage": null,
    "ingestionStartedDate": null,
    "ingestionEndedDate": null,
    "ingesting": false
  }
}

Upload URL is short-lived

The signed upload URL is valid for one hour from the moment the batch is created. If no file is uploaded before uploadUrlExpiryDate, the batch is finalized with an error.
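Before attempting the PUT, a client can cheaply check whether the signed URL is still within its validity window. A minimal sketch (the function name is illustrative):

```typescript
// Returns true while the current time is before the openBatch block's
// uploadUrlExpiryDate (an ISO-8601 timestamp), i.e. the PUT can still succeed.
function uploadUrlStillValid(uploadUrlExpiryDate: string, now: Date = new Date()): boolean {
  return now.getTime() < new Date(uploadUrlExpiryDate).getTime();
}
```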

Check the Open Batch API reference for the complete list of operations.

Uploading the batch data file

The uploaded file must contain a stream of JSON objects, with each object matching the batch data entry schema described in Creating a batch. The same fields are supported (parameters, attributes, webhookPassthrough, integrationsPassthrough, attachmentFileName, uploads). Formatting the file as JSON Lines (one object per line) is recommended for readability, but line breaks are not required for parsing.

batch-entries.jsonl
{"parameters":{"newsHeading":"Alice","newsCta":"Book a demo"},"webhookPassthrough":"alice-1"}
{"parameters":{"newsHeading":"Bob","newsCta":"Start trial"},"webhookPassthrough":"bob-2"}
{"parameters":{"newsHeading":"Carol","newsCta":"Learn more"},"webhookPassthrough":"carol-3"}

Upload the file with a plain HTTPS PUT to the signed URL returned when the batch was created:

open-batch-upload.sh
curl -X PUT \
  --data-binary @batch-entries.jsonl \
  "$OPEN_BATCH_UPLOAD_URL"

Notes on the upload:

  • The signed URL accepts a single PUT. Do not send authentication headers — the signature in the URL is the credential.
  • There is no content-type requirement for the upload request, but ingestion expects valid JSON content: a sequence of JSON objects like the examples above.
  • Blank lines and whitespace between entries are tolerated; the parser uses JSON tokens, not line breaks, to delimit entries. Invalid JSON anywhere in the file, however, causes ingestion to fail with the relevant ingestion error.
  • The upload must complete before uploadUrlExpiryDate.

Adding more entries to an already-uploaded batch is not supported. If you need to submit additional data, create a new open batch.
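A JSONL payload like the one shown above can be produced from in-memory entries with a few lines of code. A minimal sketch (the helper name is illustrative):

```typescript
// Serialize an array of batch data entries into JSON Lines: one compact JSON
// object per line, with a trailing newline.
function toJsonl(entries: Array<Record<string, unknown>>): string {
  return entries.map((entry) => JSON.stringify(entry)).join("\n") + "\n";
}
```

The resulting string can be written to a file or sent directly as the body of the PUT request.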

Ingestion progress

Ingestion is asynchronous, so progress is observed by polling GET /api/v2/batch/{batchId}. The response includes an openBatch block with the following fields:

  • uploadUrl (string) - URL for uploading the batch data file.
  • uploadUrlExpiryDate (string) - Expiry date of the upload URL.
  • finished (boolean) - Whether the open batch has finished processing. false while ingestion is in progress, true once the file has been fully consumed or the batch has terminated with an error.
  • ingesting (boolean) - true after the first chunk has been processed and while ingestion is still running; false before it starts or once finished.
  • ingestionStartedDate (string) - Instant when ingestion first began. null until the first chunk is processed.
  • ingestionEndedDate (string) - Instant when ingestion ended, on success or error. null while ingestion is still in progress.
  • errorCode (string) - Error code if the open batch terminated with an error, null otherwise.
  • errorMessage (string) - Human-readable error message matching errorCode, null otherwise.

Alongside the openBatch block, the totalRenders field on the batch grows with every ingested chunk and equals the final entry count once ingestion ends.

You can also list renders that have already been created at any time via GET /api/v2/batch/{batchId}/renders. Renders appear as soon as Plainly ingests them and proceed through the standard render lifecycle independently — some renders may already be DONE while other entries are still waiting to be ingested.
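When polling, the openBatch fields can be collapsed into a coarse status for your own bookkeeping. A hypothetical helper, assuming the field semantics described above:

```typescript
// Subset of the openBatch block relevant for status decisions.
type OpenBatchStatusFields = {
  finished: boolean;
  ingesting: boolean;
  errorCode: string | null;
};

// Derive a coarse ingestion status: a finished batch is either done or errored;
// an unfinished one is ingesting once the first chunk was processed, pending before.
function ingestionStatus(b: OpenBatchStatusFields): "pending" | "ingesting" | "done" | "error" {
  if (b.finished) return b.errorCode ? "error" : "done";
  return b.ingesting ? "ingesting" : "pending";
}
```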

Plainly ingests entries in chunks on a scheduled tick. By default, up to 250 entries per minute are ingested per open batch. Very large files are processed over many ticks, and ingestion automatically respects your organization’s render queue and throttled-job limits so you do not need to implement client-side throttling.
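Under the default pacing, a rough lower bound on ingestion time is the entry count divided by the per-minute rate. A sketch (actual timing also depends on your queue and throttle limits):

```typescript
// Rough ingestion-time estimate under the default pacing of up to 250 entries
// per minute per open batch. Treat this as a lower bound, not a guarantee.
function estimatedIngestionMinutes(entryCount: number, entriesPerMinute = 250): number {
  return Math.ceil(entryCount / entriesPerMinute);
}
```

For example, a file with 100,000 entries would take at least about 400 minutes to ingest at the default rate.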

Error handling

Open batches are not all-or-nothing

Unlike synchronous batches, entries that were already ingested before an error occurred remain in the system and continue through the standard render flow. An error terminates further ingestion of the file and sets openBatch.finished to true, with openBatch.errorCode and openBatch.errorMessage populated accordingly.

The following terminal error types may appear on openBatch.errorCode:

  • Missing upload (BATCH_RENDER_OPEN_FILE_MISSING) - The upload URL expired and no file was ever uploaded to it. The batch is finalized with zero renders.
  • Entry parse error (BATCH_RENDER_OPEN_ENTRY_PARSE_ERROR) - An entry in the uploaded file could not be parsed as a JSON object. Ingestion stops at the offending position; any renders created by previous chunks remain.
  • Usage exhausted (USAGE_RESOURCE_NOT_AVAILABLE) - Your organization has no more render usage available at the time a chunk is ingested. Renders created by previous chunks remain.
  • Render creation error (error code depends on the underlying cause, e.g. render validation) - An entry failed to be converted into a render for a non-deterministic reason (for example, the project template was deleted during ingestion).

Behaviors that are not treated as errors:

  • Organization at render queue or throttled-job limit. If your render queue is full or your throttled-job cap is reached, Plainly does not fail the batch — it skips the tick and tries again on the next one. Ingestion may pause for extended periods under sustained load but the batch stays open.
  • Slow upload. As long as the file is uploaded before uploadUrlExpiryDate, Plainly will pick it up on the next tick after it lands.

A good operational practice is to validate your batch data file locally before uploading (every entry parses as a JSON object and has the fields your project expects) so that a single bad entry does not cut a large batch short.
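A minimal local validation pass, in the spirit of that advice, might look like this. It checks only that each non-empty line parses as a JSON object; validating project-specific parameters is left to you:

```typescript
// Validate a JSONL batch data file before uploading: every non-empty line must
// parse as a JSON object (not an array, string, number, or null). Returns the
// 1-based index (among non-empty lines) of the first bad entry, if any.
function validateJsonl(contents: string): { ok: boolean; badLine?: number } {
  const lines = contents.split("\n").filter((line) => line.trim().length > 0);
  for (let i = 0; i < lines.length; i++) {
    try {
      const entry = JSON.parse(lines[i]);
      if (typeof entry !== "object" || entry === null || Array.isArray(entry)) {
        return { ok: false, badLine: i + 1 };
      }
    } catch {
      return { ok: false, badLine: i + 1 };
    }
  }
  return { ok: true };
}
```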

Example code snippets (Node.js)

These snippets illustrate the basic pattern of sending JSON and handling the result. In production you would add error checking, retries and other best practices.

Triggering a batch

import axios from "axios";

async function triggerBatch(
  projectId: string,
  templateIds: string[],
  batchData: Array<Record<string, unknown>>,
  additionalConfig?: object,
) {
  const response = await axios.post(
    "https://api.plainlyvideos.com/api/v2/batch",
    {
      projectId,
      templateIds,
      batchData,
      ...(additionalConfig || {}),
    },
    {
      auth: {
        username: process.env.PLAINLY_API_KEY!,
        password: "",
      },
      headers: {
        "Content-Type": "application/json",
      },
    },
  );

  return response.data;
}

Triggering an open batch

import axios from "axios";
import { createReadStream, statSync } from "fs";

async function triggerOpenBatch(
  projectId: string,
  templateIds: string[],
  jsonlFilePath: string,
  additionalConfig?: object,
) {
  // 1. Create the open batch and retrieve the signed upload URL.
  const createResponse = await axios.post(
    "https://api.plainlyvideos.com/api/v2/batch/open",
    {
      projectId,
      templateIds,
      ...(additionalConfig || {}),
    },
    {
      auth: {
        username: process.env.PLAINLY_API_KEY!,
        password: "",
      },
      headers: {
        "Content-Type": "application/json",
      },
    },
  );

  const batch = createResponse.data;
  const { uploadUrl } = batch.openBatch;

  // 2. Upload the JSONL file with a plain HTTPS PUT — no auth headers needed.
  await axios.put(uploadUrl, createReadStream(jsonlFilePath), {
    headers: {
      "Content-Length": statSync(jsonlFilePath).size,
    },
    maxBodyLength: Infinity,
  });

  // 3. Return the batch record — poll GET /api/v2/batch/{batchId} to track progress.
  return batch;
}

Additional operations

You can also perform additional operations related to batch renders using the API, such as retrieving batch details, listing the renders in a batch, and cancelling or deleting individual renders.

Best practices

The same best practices apply to batch renders as to single renders. In addition, consider the following:

  • Control batch size: Keep an eye on the number of renders in a batch, and split into smaller batches if you notice performance issues.
  • Increase HTTP timeouts: The batch creation API call can take a long time; increase your HTTP timeout settings to avoid premature termination.