
A Guide to Set Up Fluentd on Amazon EKS for Efficient Logging


Effective logging is a cornerstone of modern application infrastructure. It empowers organizations to monitor and troubleshoot issues, gain insights into system behavior, and ensure the smooth operation of their applications.

A robust logging solution is indispensable when managing applications on Amazon Elastic Kubernetes Service (EKS).

Fluentd, an open-source log collector and aggregator, seamlessly integrates with EKS to gather, process, and transmit logs to various destinations.

In this blog, we’ll walk you through setting up Fluentd on Amazon EKS for comprehensive logging.


Before we dive into the setup, let’s briefly introduce the key technologies involved: 

Amazon Elastic Kubernetes Service (Amazon EKS): Amazon EKS is a managed Kubernetes service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes on AWS. 

Fluentd: Fluentd is an open-source data collector that excels at unifying data collection and consumption. It supports a wide range of inputs and outputs and is highly extensible, making it an excellent choice for log collection in Kubernetes environments. 

Helm: Helm is a Kubernetes package manager that streamlines Kubernetes applications’ deployment and management. It allows you to define, install, and upgrade even the most complex Kubernetes applications. 



Before you begin, ensure you have the following prerequisites in place:

  1. An Amazon EKS cluster up and running.
  2. AWS CLI installed and configured with the necessary permissions.
  3. kubectl installed and configured to interact with your EKS cluster.
  4. Helm installed on your local machine.

Steps to Deploy Fluentd Using Helm

To set up Fluentd on Amazon EKS, we’ll use Helm to install and configure it. Follow these steps:

Step 1: Open a terminal and add the Fluentd Helm repository
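The original post’s command isn’t reproduced here; assuming the community-maintained fluent/helm-charts repository (which hosts the fluentd chart), the command typically looks like:

```shell
# Add the official Fluent Helm charts repository
helm repo add fluent https://fluent.github.io/helm-charts
```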

Step 2: Update the Helm repository
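Refreshing the local chart index ensures the latest fluentd chart version is visible before installing:

```shell
# Fetch the latest chart metadata from all configured repositories
helm repo update
```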

Step 3: Create a values.yaml file and make the desired changes
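The fluent/fluentd chart exposes its pipeline configuration through a `fileConfigs` map in values.yaml, with one entry per pipeline stage. A skeleton of the structure the following sections describe (the stage file names mirror the chart’s defaults) might look like:

```yaml
# values.yaml -- skeleton only; each section is filled in below
fileConfigs:
  01_sources.conf: |-
    # where Fluentd reads logs from (Sources Configuration)
  02_filters.conf: |-
    # metadata enrichment and filtering (Filters Configuration)
  03_dispatch.conf: |-
    # metrics collection and routing (Dispatch Configuration)
  04_outputs.conf: |-
    # final destination, e.g. Elasticsearch (Output Configuration)
```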

Sources Configuration: This section defines how Fluentd collects log data from various sources.

  • @type tail: Specifies that Fluentd should use the “tail” input plugin, which allows it to read log files.
  • @id in_tail_container_logs: Assigns an identifier to this source, which can be used in labeling and routing log entries.
  • path /var/log/containers/*.log: Defines the path where Fluentd will look for log files. In this case, it’s set to collect log files from the /var/log/containers/ directory with a wildcard (*) to match all files with a .log extension.
  • pos_file /var/log/fluentd-containers.log.pos: Maintains a position file to keep track of the log file’s position, ensuring it doesn’t reprocess already collected logs.
  • tag kubernetes.*: Assigns a tag to the collected logs, which can be used for filtering and routing.
  • read_from_head true: Specifies that Fluentd should start reading from the beginning of the log files.
  • <parse>: Defines how Fluentd should parse log entries. In this case, it uses the multi_format parser to handle both JSON and regular expression formats.
  • <pattern>: Specifies log entry patterns to match and parse.
  • emit_unmatched_lines true: Ensures unmatched log lines are still emitted, even if they don’t match the specified patterns.

The 01_sources.conf section collects log data from container logs, specifically those located in /var/log/containers/.
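A sources section matching the directives described above, modeled on the fluent/fluentd chart’s default configuration (the time formats and regular expression are the chart’s defaults, not taken from the original post), could be:

```yaml
fileConfigs:
  01_sources.conf: |-
    <source>
      @type tail
      @id in_tail_container_logs
      @label @KUBERNETES
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        # Try JSON first (Docker runtime log format)
        <pattern>
          format json
          time_key time
          time_type string
          time_format "%Y-%m-%dT%H:%M:%S.%NZ"
          keep_time_key false
        </pattern>
        # Fall back to the CRI/containerd plain-text format
        <pattern>
          format regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          time_format '%Y-%m-%dT%H:%M:%S.%NZ'
          keep_time_key false
        </pattern>
      </parse>
      emit_unmatched_lines true
    </source>
```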

Filters Configuration: This section focuses on processing and filtering the collected log data.

  • <label @KUBERNETES>: Assigns a label to the following filter and match directives, allowing you to group related filters and matches.
  • <match kubernetes.var.log.containers.fluentd**>: Defines a match directive that catches Fluentd’s own container logs and relabels them to the @FLUENT_LOG label, keeping Fluentd from recursively collecting its own output.
  • <filter kubernetes.**>: Uses the kubernetes_metadata filter to add Kubernetes metadata to log entries, enhancing the logs with information about the source pods and containers.
  • <match **>: Routes all remaining log entries to the @DISPATCH label for further processing.

In this section, you have the flexibility to add more filters to ignore log entries from specific namespaces, if needed.
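A filters section implementing the directives above (again patterned on the chart’s defaults; the relabel outputs carry entries between labels) might read:

```yaml
fileConfigs:
  02_filters.conf: |-
    <label @KUBERNETES>
      # Divert Fluentd's own logs so they are not re-collected
      <match kubernetes.var.log.containers.fluentd**>
        @type relabel
        @label @FLUENT_LOG
      </match>
      # Enrich every record with pod/container metadata
      <filter kubernetes.**>
        @type kubernetes_metadata
        @id filter_kube_metadata
      </filter>
      # Hand everything else to the dispatch stage
      <match **>
        @type relabel
        @label @DISPATCH
      </match>
    </label>
```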

Dispatch Configuration: This section dispatches log entries for further processing and routing.

  • <label @DISPATCH>: Assigns a label to the filter and match directives within this section.
  • <filter **>: Uses the Prometheus filter to collect metrics about the incoming log records. It tracks the total number of records, tagging them with information about the log’s tag and hostname.
  • <match **>: Routes all log entries to the @OUTPUT label for final processing and sending to the output destination.
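A dispatch section along these lines (the metric name and labels follow the chart’s default Prometheus counter, not the original post) would be:

```yaml
fileConfigs:
  03_dispatch.conf: |-
    <label @DISPATCH>
      # Count every record passing through, tagged by source and host
      <filter **>
        @type prometheus
        <metric>
          name fluentd_input_status_num_records_total
          type counter
          desc The total number of incoming records
          <labels>
            tag ${tag}
            hostname ${hostname}
          </labels>
        </metric>
      </filter>
      # Forward all records to the output stage
      <match **>
        @type relabel
        @label @OUTPUT
      </match>
    </label>
```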

Output Configuration: This section defines the output destination for the processed log entries.

  • @type elasticsearch: Specifies the output plugin type as Elasticsearch.
  • host “elasticsearch-master”: Specifies the Elasticsearch host where Fluentd should send logs.
  • port 9200: Defines the Elasticsearch port.
  • path “”: Specifies the path for Elasticsearch, which is left empty in this example.
  • user elastic: Sets the Elasticsearch user.
  • scheme “https”: Specifies the communication scheme with Elasticsearch as HTTPS.
  • ssl_verify false: Disables SSL verification for simplicity. In production, it’s recommended to set this to true.
  • ssl_version “TLSv1_2”: Specifies the SSL/TLS version.
  • password dhdiuwododhw: Sets the Elasticsearch password (a placeholder here; in production, supply it from a Kubernetes Secret rather than a plaintext value).
  • logstash_format true: Indicates that Fluentd should use Logstash-compatible formatting for log entries.
  • logstash_prefix fluentd-${$.kubernetes.container_name}: Specifies the Logstash index prefix for log entries (keep it lowercase, since Elasticsearch index names must be lowercase).
  • <buffer tag, $.kubernetes.container_name>: Configures buffering, chunking log entries by tag and container name so the index prefix placeholder can be resolved per chunk.
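Putting the output settings above together (the buffer body below — file buffering with a 5s flush interval — is a common choice and an assumption, not from the original post):

```yaml
fileConfigs:
  04_outputs.conf: |-
    <label @OUTPUT>
      <match **>
        @type elasticsearch
        host "elasticsearch-master"
        port 9200
        path ""
        scheme "https"
        ssl_verify false          # set to true in production
        ssl_version "TLSv1_2"
        user elastic
        password dhdiuwododhw     # placeholder; use a Secret in practice
        logstash_format true
        logstash_prefix fluentd-${$.kubernetes.container_name}
        # Chunk by tag and container name so the prefix resolves per chunk
        <buffer tag, $.kubernetes.container_name>
          @type file
          path /var/log/fluentd-buffers/kubernetes.buffer
          flush_mode interval
          flush_interval 5s
        </buffer>
      </match>
    </label>
```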

Step 4: Install Fluentd using Helm
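Assuming the values.yaml from Step 3 and a dedicated logging namespace (the release name and namespace are illustrative choices, not from the original post), the install command typically looks like:

```shell
# Install the fluentd chart (deployed as a DaemonSet by default),
# applying the custom pipeline configuration
helm install fluentd fluent/fluentd \
  --namespace logging \
  --create-namespace \
  -f values.yaml
```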

This command deploys Fluentd on your Amazon EKS cluster with default settings. You can customize the Fluentd configuration by specifying values in a Helm values file or using command-line flags during installation.

Step 5: Verify the Fluentd setup by checking the status of the Fluentd pods
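Assuming the release name and namespace from Step 4 (and the chart’s standard `app.kubernetes.io/name` label), verification might look like:

```shell
# One fluentd pod should be Running on each node (DaemonSet)
kubectl get pods -n logging -l app.kubernetes.io/name=fluentd

# Tail the collector's own logs to confirm it is reading and flushing
kubectl logs -n logging daemonset/fluentd --tail=20
```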


Effective logging plays a crucial role in modern application management on Amazon EKS. Fluentd simplifies log collection, processing, and analysis for organizations when properly integrated. This guide outlines the steps to create an efficient logging system, improving visibility and enabling swift troubleshooting for containerized applications on Amazon EKS.

Drop a query if you have any questions regarding Fluentd on Amazon EKS, and we will get back to you quickly.


About CloudThat

CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training Partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, Amazon QuickSight Service Delivery Partner, AWS EKS Service Delivery Partner, and Microsoft Gold Partner, helping people develop cloud knowledge and businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.

To get started, go through our Consultancy page and Managed Services Package, CloudThat’s offerings.


FAQs

1. What is Fluentd, and why is it used in Amazon EKS?

ANS: – Fluentd is an open-source log collector and aggregator. It’s used in Amazon EKS to efficiently collect, process, and send logs from containers and pods to destinations like Elasticsearch, making log management and analysis easier.

2. How can I customize Fluentd to filter specific logs in Amazon EKS?

ANS: – You can customize Fluentd by editing its configuration file before deployment. In the configuration, you can specify filters to ignore or process logs from specific namespaces, providing fine-grained control over log collection and analysis.

WRITTEN BY Dharshan Kumar K S

Dharshan Kumar is a Research Associate at CloudThat. He has working knowledge of various cloud platforms such as AWS, Microsoft, and GCP. He is interested in learning more about AWS’s Well-Architected Framework and writes about it.



