Elasticsearch Sink Node
Quick Reference
Use Credentials
Select a Username/Password credential for Elasticsearch. Optional; leave empty for an unauthenticated cluster.
Host
Base URL of the Elasticsearch cluster.
ex: https://es.example.com:9200
Index
The Elasticsearch index where records are written.
ex: workflow-events
Batch Size
How many records the node collects before sending a bulk request to Elasticsearch. Defaults to 500.
The Elasticsearch Sink node writes workflow records to an Elasticsearch index using the Bulk API, batching records to reduce request overhead.
Configuration
| Field | Description | Required | Placeholder |
|---|---|---|---|
| Use Credentials | Select or create a Username/Password credential. The username and password are sent as HTTP Basic authentication. Leave empty for clusters that do not require authentication. API-key auth is not supported. | No | es-writer |
| Host | Base URL of the Elasticsearch cluster. Include the protocol and port. | Yes | https://es.example.com:9200 |
| Index | Name of the index where the records are written. The index must already exist or the cluster must be configured to create it on demand. | Yes | workflow-events |
| Batch Size | Number of records buffered before the node sends a bulk request to Elasticsearch. Minimum: 1. | No | 500 |
Use Credentials
Select an existing Username/Password credential from the dropdown or create a new one. The connector authenticates against Elasticsearch using HTTP Basic auth — API-key authentication is not supported. The user must have permission to write to the target index (create and index privileges).
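If you want to confirm a credential before selecting it in the node, the same HTTP Basic exchange can be reproduced by hand. A minimal sketch, assuming Python's requests library; the host and credential values below are placeholders, not values taken from your workflow:

```python
import requests

# Minimal sketch: check that a Username/Password credential is accepted by
# the cluster over HTTP Basic auth. Host and credential values are placeholders.
host = "https://es.example.com:9200"
auth = ("es-writer", "s3cret")  # hypothetical username/password

resp = requests.get(host, auth=auth, timeout=10)  # authenticated ping of the cluster root
resp.raise_for_status()
print(resp.json()["cluster_name"])
```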
Host
The base URL of your Elasticsearch cluster. Format: scheme://host[:port]. Include the protocol (http or https). Trailing slashes are stripped automatically.
Index
The destination Elasticsearch index. Each record in a batch is sent with an index action targeting this index, so the cluster assigns a document ID automatically.
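As an illustration, the action/record pair generated for a single record looks like the sketch below; the record contents and index name are placeholders:

```python
import json

# Sketch of the action/record pair produced for one record. The action line
# omits "_id", so Elasticsearch assigns the document ID itself.
record = {"event": "workflow.completed", "duration_ms": 842}  # hypothetical record
action = {"index": {"_index": "workflow-events"}}

pair = json.dumps(action) + "\n" + json.dumps(record) + "\n"
print(pair)
```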
Batch Size
Records accumulate in memory until the batch reaches this size, at which point the node serializes them as NDJSON and POSTs the batch to the /_bulk endpoint. Larger batches reduce request overhead; smaller batches lower memory usage and keep latency tighter.
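The following sketch reproduces that flow outside the node, assuming Python's requests library; the host, index, credential, and records are placeholders:

```python
import json
import requests

# Minimal sketch of the bulk flow described above. Host, index, credential,
# and records are placeholders.
host = "https://es.example.com:9200".rstrip("/")  # trailing slashes are stripped
index = "workflow-events"
batch = [
    {"event": "workflow.started"},
    {"event": "workflow.completed", "duration_ms": 842},
]

# Serialize the batch as NDJSON: one action line plus one record line per record.
lines = []
for record in batch:
    lines.append(json.dumps({"index": {"_index": index}}))
    lines.append(json.dumps(record))
body = "\n".join(lines) + "\n"  # the Bulk API requires a trailing newline

resp = requests.post(
    f"{host}/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
    auth=("es-writer", "s3cret"),  # hypothetical credential
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["errors"])  # False when every record was indexed
```

The node performs this batching and request handling internally; the sketch only shows the wire format it produces.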
Examples
Example: Stream Workflow Events into Elasticsearch
Use this node when downstream consumers query Elasticsearch and you want every record produced by the workflow indexed there.
- Select the credential authorized to write to the destination index.
- Enter the cluster URL in Host.
- Set Index to the destination index, for example workflow-events.
- Increase Batch Size for higher throughput, or lower it if you want fresher data in Elasticsearch (a settings sketch follows this list).
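For reference, the same settings can be pictured as a plain configuration object. This is a hypothetical representation whose keys mirror the UI labels above; your actual workflow definition format may differ:

```python
# Hypothetical settings object; keys mirror the UI labels, values are examples.
elasticsearch_sink = {
    "credentials": "es-writer",             # Username/Password credential name
    "host": "https://es.example.com:9200",
    "index": "workflow-events",
    "batch_size": 500,                      # raise for throughput, lower for freshness
}
```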
Example: Per-Environment Indexes
Configure separate Elasticsearch Sink nodes for different environments by pointing each at its own Index name (for example events-staging and events-prod). The credential, host, and batch settings can stay the same.
Error Handling
The node uses the Elasticsearch Bulk API. When the cluster returns a partial-failure response, successfully indexed records are counted as success and only the failed records are routed to the configured error handling. A non-200 HTTP response from /_bulk causes the entire batch to be reported as failed.
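To make the partial-failure behavior concrete, here is a sketch of how such a response can be read. The response body is a trimmed, hypothetical example of the Bulk API's shape:

```python
# /_bulk returns HTTP 200 even when some items fail; the top-level "errors"
# flag and the per-item "error" objects identify the failures.
response = {
    "errors": True,
    "items": [
        {"index": {"_index": "workflow-events", "status": 201}},
        {"index": {"_index": "workflow-events", "status": 400,
                   "error": {"type": "mapper_parsing_exception"}}},
    ],
}

failed = [item["index"] for item in response["items"] if item["index"].get("error")]
succeeded = len(response["items"]) - len(failed)
print(f"{succeeded} indexed, {len(failed)} routed to error handling")
```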
Related Nodes
- Elasticsearch Source: Read documents from an Elasticsearch index
- Kafka Sink: Publish workflow records to a Kafka topic
- S3 Sink: Write workflow records to AWS S3