Amazon S3 destination plugin

Amazon S3 is a highly scalable and durable object storage service provided by Amazon Web Services (AWS). The S3 destination plugin in Calyptia Core allows you to store and archive your data by sending it directly to your Amazon S3 bucket. With this plugin, you can configure your pipeline to store various data types such as logs, metrics, traces, and events, in your S3 bucket for long-term storage or archival purposes. The S3 destination plugin provides a flexible and customizable way to integrate your data with your S3 bucket, letting you tailor your storage and archival strategies to meet your specific needs.

Configuration parameters

The Amazon S3 destination plugin provides these configuration parameters.


Region: The AWS region of your S3 bucket.
Bucket: The name of your S3 bucket.
Total File Size (Bytes): Specifies the size of files in S3. Maximum size is 50 GB; minimum is 1 MB.

AWS authentication

AWS Shared Credential File: Specifies the shared credential file to use when uploading if not using an IAM role ARN.
IAM Role ARN: ARN of an IAM role to assume (for example, for cross-account access).
S3 Object ACL Policy: Predefined canned ACL policy for S3 objects.
S3 API Endpoint: Custom endpoint for the AWS S3 API.
STS API Endpoint: Custom endpoint for the STS API.
External ID for STS API: Specifies an external ID for the STS API. Can be used with the role_arn parameter if your role requires an external ID.


Use Put Object: Use the S3 PutObject API instead of the multipart upload API.
Send Content-MD5 Header: Send the Content-MD5 header with object uploads, as required when Object Lock is enabled.
Preserve Data Ordering: Normally, when an upload request fails, there's a high chance the last received chunk will be swapped with a later chunk, resulting in data shuffling. This feature prevents shuffling by using queue logic for uploads.
Log Key: By default, the whole log record is sent to S3. If you specify a key name with this option, only the value of that key is sent to S3.
Storage Class: Specifies the storage class for S3 objects. If this option isn't specified, objects are stored with the default STANDARD storage class.
Store Dir: Directory used to locally buffer data before sending. The plugin uses the S3 multipart upload API to send data in chunks of 5 MB at a time, so only a small amount of data is locally buffered at any given point in time.
S3 Key Format: Format string for keys in S3. This option supports strftime time formats and a syntax for selecting parts of the Fluent log tag, inspired by the rewrite_tag filter. Add $TAG in the format string to insert the full log tag, or $TAG[0] to insert the first part of the tag into the S3 key. The tag is split into parts using the characters specified with the S3 Key Format Tag Delimiters option. Add $INDEX to enable sequential indexing for file names; adding $INDEX prevents a random string from being appended to the end of the key when $UUID is not provided. See the in-depth examples and tutorial in the documentation.
S3 Key Format Tag Delimiters: A series of characters used to split the tag into parts for use with the S3 Key Format option.
Use Static File Path: Disables the behavior where a UUID string is automatically appended to the end of the S3 key name when $UUID is not provided in the S3 Key Format value. $UUID, time formats, $TAG, and other dynamic key formats all work as expected while this feature is set to true.
Enable Auto Retry Requests: Immediately retry failed requests to AWS services once. This option doesn't affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which can help improve throughput when there are transient or random networking issues.
JSON Date Format: Specifies the format of the date. Supported formats: double, iso8601 (for example, 2018-05-30T09:39:52.000681Z), java_sql_timestamp (for example, 2018-05-30 09:39:52.000681, which can be used with AWS Athena), and epoch.
JSON Date Key: Specifies the name of the date field in output.
Upload Chunk Size (Bytes): This plugin uses the S3 multipart upload API to stream data to S3, ensuring your data is uploaded as quickly as possible. This parameter configures the size of each part in the upload. The total_file_size option configures the size of the file you'll see in S3; this option determines the size of chunks uploaded until that size is reached. These chunks are temporarily stored in chunk_buffer_path until their size reaches upload_chunk_size, at which point the chunk is uploaded to S3. Default: 5M; maximum: 50M; minimum: 5M.
Upload Timeout: Optionally specify a timeout for uploads. Whenever this amount of time has elapsed, Fluent Bit completes the upload and creates a new file in S3. For example, set this value to 60m to get a new file in S3 every hour. Default: 10m.
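As a sketch of how these parameters fit together, the following Fluent Bit-style output section sets the most common options. The bucket name, region, paths, and key format are placeholder values, and the lowercase parameter names follow the underlying Fluent Bit s3 output plugin that Calyptia Core builds on:

```
[OUTPUT]
    name              s3
    match             *
    region            us-east-1
    bucket            my-bucket
    total_file_size   100M
    upload_chunk_size 5M
    upload_timeout    10m
    store_dir         /tmp/fluent-bit/s3
    # Keys like /logs/app/2024/05/30/<uuid>.log for tag "app.access"
    s3_key_format     /logs/$TAG[0]/%Y/%m/%d/$UUID.log
    s3_key_format_tag_delimiters .
```

With this configuration, a new object is created whenever the accumulated data reaches 100 MB or the 10-minute upload timeout elapses, whichever comes first.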

Advanced networking

DNS Mode: Select the primary DNS connection type (TCP or UDP).
DNS Resolver: Select the primary DNS resolver type (LEGACY or ASYNC).
Prefer IPv4: Prioritize IPv4 DNS results when trying to establish a connection.
Keepalive: Enable or disable keepalive support.
Keepalive Idle Timeout: Set the maximum time allowed for an idle keepalive connection.
Max Connect Timeout: Set the maximum time allowed to establish a connection; this time includes the TLS handshake.
Max Connect Timeout Log Error: Specifies whether a connection timeout should be logged as an error. When disabled, the timeout is logged as a debug message.
Max Keepalive Recycle: Set the maximum number of times a keepalive connection can be used before it's retired.
Source Address: Specify the network address to bind for data traffic.
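These options correspond to the `net.*` transport properties of the underlying Fluent Bit output. A sketch, with illustrative placeholder values (timeouts in seconds):

```
[OUTPUT]
    name                          s3
    match                         *
    region                        us-east-1
    bucket                        my-bucket
    net.dns.mode                  TCP
    net.dns.resolver              ASYNC
    net.dns.prefer_ipv4           true
    net.keepalive                 on
    net.keepalive_idle_timeout    30
    net.keepalive_max_recycle     2000
    net.connect_timeout           10
    net.connect_timeout_log_error off
    net.source_address            10.0.0.5
```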

Shared credential file

Your shared credential file provides authentication credentials to the Amazon S3 destination plugin. This file must be an AWS credentials file that includes an aws_access_key_id parameter and an aws_secret_access_key parameter. For example:
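A minimal credentials file looks like this, using the placeholder key values from the AWS documentation:

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```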


To reference this file in your plugin configuration, use the following syntax:

{{ files.NAME }}

Replace NAME with the name of your credentials file.
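For instance, assuming you uploaded the credentials file under the hypothetical name aws_credentials, the AWS Shared Credential File parameter would be set to:

```
{{ files.aws_credentials }}
```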

Bucket policies

To use the Amazon S3 destination plugin, you must grant the plugin write access to your S3 buckets. These bucket policies are managed within Amazon S3, not within the plugin's own configuration.

For example, the following bucket policy lets the Amazon S3 destination plugin send data to a bucket named my-bucket:

Bucket policy
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": [