GCS Sink Node
Quick Reference
Use Credentials
Select a GCP credential used to write to the bucket. Optional — if no credential is selected, the node uses the workflow runtime's Application Default Credentials.
Bucket Name
The Google Cloud Storage bucket the node writes to.
ex: my-data-lake
Object Prefix
Prefix prepended to each generated object name. Defaults to events/.
ex: workflow-output/
Batch Size
How many records the node collects before writing a batch as a single object in GCS. Defaults to 100.
The GCS Sink node writes workflow records to a Google Cloud Storage bucket as newline-delimited JSON objects, batching records to reduce the number of GCS write operations.
Configuration
| Field | Description | Required | Placeholder |
|---|---|---|---|
| Use Credentials | Select or create a GCP credential. The credential stores either a service account JSON key or an OAuth access token; if no credential is selected, Application Default Credentials are used. | No | gcs-credential |
| Bucket Name | Name of the Google Cloud Storage bucket the node writes to. | Yes | my-data-lake |
| Object Prefix | Prefix prepended to every generated object name. Use it to organize the written files into a folder. | No | events/ |
| Batch Size | Number of records buffered in memory before the node flushes them as a single object to GCS. Minimum: 1. | No | 100 |
Use Credentials
Select an existing GCP credential from the dropdown or create a new one. Supported authentication types:
- Service Account JSON key — paste the full JSON keyfile contents.
- Access Token — provide a short-lived OAuth `ya29.*` token.
- Application Default Credentials — used automatically when no credential is selected and the workflow runtime has ADC available.
The credential needs the Storage Object Creator role (`roles/storage.objectCreator`) or equivalent write access on the target bucket.
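For orientation, the sketch below shows roughly how each credential type maps onto client construction with the google-cloud-storage Python library. It is an illustration, not the node's actual internals; the key path, project ID, and token are placeholders.

```python
from google.cloud import storage
from google.oauth2.credentials import Credentials

# Application Default Credentials: resolved from the environment
# (GOOGLE_APPLICATION_CREDENTIALS, gcloud auth, or the metadata server).
adc_client = storage.Client()

# Service Account JSON key: equivalent to pasting the keyfile contents.
# "key.json" is a placeholder path.
sa_client = storage.Client.from_service_account_json("key.json")

# Short-lived OAuth access token (ya29.*); placeholder values throughout.
token_client = storage.Client(
    project="my-project",
    credentials=Credentials(token="ya29.EXAMPLE"),
)
```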
Bucket Name
The exact name of the GCS bucket. The bucket must exist before the workflow runs.
Object Prefix
The prefix is prepended to every object the node writes. Final object names use the pattern:
{prefix}{yyyy/MM/dd/HH-mm-ss}-{uuid}.jsonl
For example, with prefix events/, an object might be named events/2026/04/27/12-34-56-3f1a8b9d-....jsonl. Include a trailing / if you want the prefix to act as a folder.
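A minimal sketch of that naming pattern in Python, assuming a UTC timestamp and a random UUID (the node's exact timestamp source and UUID version are not specified here):

```python
import uuid
from datetime import datetime, timezone

def object_name(prefix: str) -> str:
    # {prefix}{yyyy/MM/dd/HH-mm-ss}-{uuid}.jsonl
    stamp = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H-%M-%S")
    return f"{prefix}{stamp}-{uuid.uuid4()}.jsonl"

print(object_name("events/"))  # e.g. events/2026/04/27/12-34-56-<uuid>.jsonl
```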
Batch Size
Records accumulate in memory until the batch reaches this size, at which point the node serializes them as newline-delimited JSON and uploads the batch as a single GCS object. Larger batches reduce the number of GCS write operations but increase memory usage and end-to-end latency.
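Put together, the buffer-and-flush behavior amounts to something like the sketch below, using the google-cloud-storage Python client. Class and method names are illustrative, not the node's actual implementation.

```python
import json
import uuid
from datetime import datetime, timezone

from google.cloud import storage

class GcsBatchWriter:
    """Illustrative sketch of buffer-then-flush, not the node's real code."""

    def __init__(self, bucket_name: str, prefix: str = "events/", batch_size: int = 100):
        self.bucket = storage.Client().bucket(bucket_name)
        self.prefix = prefix
        self.batch_size = batch_size
        self.buffer: list[dict] = []

    def write(self, record: dict) -> None:
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        # Serialize the batch as newline-delimited JSON: one record per line.
        body = "\n".join(json.dumps(r) for r in self.buffer) + "\n"
        stamp = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H-%M-%S")
        name = f"{self.prefix}{stamp}-{uuid.uuid4()}.jsonl"
        # One upload per batch: GCS write operations scale with the number
        # of batches, not the number of records.
        self.bucket.blob(name).upload_from_string(
            body, content_type="application/x-ndjson"
        )
        self.buffer.clear()
```

A real sink must also flush any partial final batch when the stream ends; if you adapt this pattern, remember a trailing flush() call.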
Examples
Example: Archive Workflow Output
Use this node when you want to persist every record processed by the workflow to GCS for long-term storage or downstream processing.
- Select the credential authorized to write to the archive bucket.
- Enter the bucket name.
- Set Object Prefix to a folder that identifies the workflow (for example archive/orders/).
- Increase Batch Size if the workflow is high-volume and you want fewer, larger files in GCS.
Example: Date-Partitioned Output
The default object naming already includes a yyyy/MM/dd path component, so a prefix such as landing/ produces day-partitioned folders without any extra configuration. Downstream tools that understand date-based directory layouts can read the produced partitions directly; note the segments are plain yyyy/MM/dd paths, not Hive-style key=value directories.
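As an illustration of consuming that layout, a single day's output can be selected with a plain prefix filter; the bucket name and date below are placeholders:

```python
from google.cloud import storage

client = storage.Client()
# List only the objects written on one day under the landing/ prefix.
for blob in client.list_blobs("my-data-lake", prefix="landing/2026/04/27/"):
    print(blob.name)
```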
Output Format
The sink writes objects as newline-delimited JSON (.jsonl) with the content type application/x-ndjson. Each line in the file is one record from the workflow batch.
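Reading an object back is just line splitting plus JSON parsing; a minimal sketch with the Python client, using placeholder bucket and object names:

```python
import json
from google.cloud import storage

# Placeholder bucket and object names for illustration.
blob = storage.Client().bucket("my-data-lake").blob(
    "events/2026/04/27/12-34-56-example.jsonl"
)
records = [json.loads(line) for line in blob.download_as_text().splitlines() if line]
```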
Related Nodes
- GCS Source: Read objects from a Google Cloud Storage bucket
- S3 Sink: Write workflow records to AWS S3
- Kafka Sink: Stream workflow records to a Kafka topic