📄️ Eval Node
The Fleak Eval Node allows you to transform and manipulate events using the Fleak Eval Expression Language (FEEL). Use it to create new fields, modify existing values, and perform calculations on your event data.
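A minimal sketch of an eval step via the Java SDK; the package path, `startFlow()`, and `eval()` names are assumptions, and the expression string follows FEEL's `dict(...)` style:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class EvalSketch {
  public static void main(String[] args) {
    // Assumed SDK entry point; the FEEL expression emits a new event
    // containing the original value plus a computed field.
    ZephFlow flow = ZephFlow.startFlow()
        .eval("dict(original=$.num, doubled=$.num * 2)");
  }
}
```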
📄️ Filter Node
The filter node allows you to selectively process events in your ZephFlow pipeline based on specific conditions. It evaluates each incoming event against an expression written in the Fleak Eval Expression Language (FEEL).
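A hypothetical filter step, assuming the same SDK shape as above; events for which the FEEL condition evaluates to false are dropped:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class FilterSketch {
  public static void main(String[] args) {
    // Keep only events whose status_code field indicates an error.
    // The filter() method name and condition syntax are assumptions.
    ZephFlow flow = ZephFlow.startFlow()
        .filter("$.status_code >= 400");
  }
}
```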
📄️ Parser Node
The Parser Node is used to transform raw text data (strings) into structured data (JSON objects). This is an essential step in log processing, allowing you to extract specific fields, such as IP addresses, from raw log lines.
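A sketch of the idea rather than the node's actual configuration: the `parse()` method, the `__raw__` field name, and the grok-style pattern are all hypothetical placeholders for how a raw line might become structured fields:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class ParserSketch {
  public static void main(String[] args) {
    // Hypothetical parser step: read the raw string payload and extract
    // structured fields (client_ip, method) via a grok-style pattern.
    ZephFlow flow = ZephFlow.startFlow()
        .parse("{\"targetField\": \"__raw__\","
             + " \"grokPattern\": \"%{IP:client_ip} %{WORD:method}\"}");
  }
}
```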
📄️ SQL Node
The Fleak SQL Node allows you to manipulate and transform data within your data workflows using PostgreSQL-compatible SQL syntax. Built on FleakSQL, a custom-built engine designed specifically for data transformation.
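The query below is ordinary PostgreSQL-style syntax; the `sql()` step name and the `events` alias for the incoming batch are assumptions:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class SqlSketch {
  public static void main(String[] args) {
    // Aggregate error counts per host over the incoming events.
    ZephFlow flow = ZephFlow.startFlow()
        .sql("SELECT host, COUNT(*) AS error_count "
           + "FROM events WHERE level = 'ERROR' GROUP BY host");
  }
}
```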
📄️ Kafka Source Node
The Kafka Source node allows your workflow to ingest real-time streaming data from an Apache Kafka cluster. It acts as a starting point for your data pipeline, continuously polling a specified topic for new messages.
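A hypothetical source configuration; the `kafkaSource()` parameter list (broker, topic, consumer group, payload encoding, extra client properties) is an assumption about the SDK's shape:

```java
import java.util.Map;
import io.fleak.zephflow.sdk.ZephFlow;

public class KafkaSourceSketch {
  public static void main(String[] args) {
    // Poll the raw-logs topic continuously and decode each record as JSON.
    ZephFlow flow = ZephFlow.startFlow()
        .kafkaSource("localhost:9092", "raw-logs", "zephflow-group",
                     "JSON_OBJECT", Map.of());
  }
}
```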
📄️ Kafka Sink Node
The Kafka Sink Node allows you to write processed data records from your Fleak workflow directly into an Apache Kafka topic.
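And the mirror-image sink, again with an assumed parameter list (broker, topic, partition-key expression, encoding, properties); `execute()` is likewise an assumed way to run the flow:

```java
import java.util.Map;
import io.fleak.zephflow.sdk.ZephFlow;

public class KafkaSinkSketch {
  public static void main(String[] args) throws Exception {
    // Write each processed record to the processed-logs topic,
    // partitioned by the host field.
    ZephFlow.startFlow()
        .kafkaSink("localhost:9092", "processed-logs", "$.host",
                   "JSON_OBJECT", Map.of())
        .execute("job-1", "dev", "my-service");
  }
}
```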
📄️ SQS Source Node
The SQS Source node allows your workflow to continuously read messages from an Amazon SQS queue. It acts as the starting point of your pipeline, polling the queue for new messages and passing each one into the pipeline as a record.
📄️ SQS Sink Node
The SQS Sink node sends records from your workflow as messages to an Amazon SQS queue. Each record in the pipeline becomes one SQS message. Messages are sent in batches for efficiency. Both standard and FIFO queues are supported.
📄️ JDBC Source Node
The JDBC Source node reads data from a relational database and passes each row as a record into your workflow. It supports both batch mode (reads all rows once and stops) and streaming mode (continuously polls the table for new rows).
📄️ JDBC Sink Node
The JDBC Sink node writes records from your workflow into a relational database table. It supports inserting new rows or upserting (insert or update) based on a key you define. Records are written in batches for efficiency.
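A hypothetical upsert configuration; the `jdbcSink()` name and its parameters (JDBC URL, target table, key columns) are assumptions:

```java
import java.util.List;
import io.fleak.zephflow.sdk.ZephFlow;

public class JdbcSinkSketch {
  public static void main(String[] args) {
    // Upsert into user_events: rows whose event_id already exists are
    // updated, all others are inserted. Writes happen in batches.
    ZephFlow.startFlow()
        .jdbcSink("jdbc:postgresql://localhost:5432/analytics",
                  "user_events", List.of("event_id"));
  }
}
```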
📄️ GCS Source Node
The `gcssource` node reads objects from a Google Cloud Storage bucket and emits each object's decoded content as workflow records.
📄️ GCS Sink Node
The `gcssink` node writes workflow records to a Google Cloud Storage bucket as newline-delimited JSON objects, batching records to reduce the number of GCS write operations.
📄️ Elasticsearch Source Node
The `elasticsearchsource` node queries an Elasticsearch index using the scroll API and emits each matching document as a workflow record.
📄️ Elasticsearch Sink Node
The `elasticsearchsink` node writes workflow records to an Elasticsearch index using the Bulk API, batching records to reduce request overhead.
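A hypothetical sink configuration; `elasticsearchSink()` and its parameters (cluster endpoint, target index, records buffered per Bulk API request) are assumptions:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class EsSinkSketch {
  public static void main(String[] args) {
    // Buffer up to 500 records per bulk request to reduce HTTP overhead.
    ZephFlow.startFlow()
        .elasticsearchSink("https://es.example.com:9200",
                           "zephflow-logs", 500);
  }
}
```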
📄️ Azure Blob Storage Source Node
The `azureblobsource` node reads blobs from an Azure Blob Storage container and emits each blob's decoded content as workflow records.
📄️ Azure Blob Storage Sink Node
The `azureblobsink` node writes workflow records to an Azure Blob Storage container as newline-delimited JSON blobs, batching records to reduce the number of blob write operations.
📄️ Splunk Source Node
The Splunk Source connector allows you to ingest data directly from Splunk Enterprise or Splunk Cloud Platform into your workflow. It functions by submitting a search job to the Splunk API and retrieving the results.
📄️ LDAP Source Node
The LDAP Source node reads entries from an LDAP directory or Active Directory server and passes each matching entry as a record into your workflow. Common use cases include importing user accounts, groups, and other directory entries.
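A hypothetical source configuration; `ldapSource()` and its parameters are assumptions, though the filter string itself is standard LDAP search-filter syntax:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class LdapSourceSketch {
  public static void main(String[] args) {
    // Emit every person entry under the Users subtree as one record.
    ZephFlow flow = ZephFlow.startFlow()
        .ldapSource("ldap://dc01.example.com:389",
                    "ou=Users,dc=example,dc=com",
                    "(objectClass=person)");
  }
}
```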
📄️ S3 Output Node
The S3 Sink node writes workflow output data to AWS S3 via a configured data asset, with optional batching for high-volume streams.
📄️ Delta Sink Node
The Delta Lake Sink Node enables you to write processed workflow data directly into a physical storage location in the Delta Lake table format.
📄️ Databricks Sink Node
The Databricks Sink Node allows you to ingest processed data directly into Databricks Unity Catalog tables. Unlike direct file writers, this node leverages a Databricks SQL Warehouse to ensure ACID compliance.
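A hypothetical sink configuration; `databricksSink()` and its parameters (SQL Warehouse HTTP path, access token, fully qualified Unity Catalog table) are assumptions:

```java
import io.fleak.zephflow.sdk.ZephFlow;

public class DatabricksSinkSketch {
  public static void main(String[] args) {
    // Route writes through a SQL Warehouse rather than writing files
    // directly, so the table gets transactional (ACID) semantics.
    ZephFlow.startFlow()
        .databricksSink("/sql/1.0/warehouses/abc123", "<token>",
                        "main.analytics.events");
  }
}
```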