{"id":5145,"date":"2016-11-22T08:13:48","date_gmt":"2016-11-22T08:13:48","guid":{"rendered":"http:\/\/23.23.12.23:8085\/?p=5145"},"modified":"2024-06-25T11:12:21","modified_gmt":"2024-06-25T11:12:21","slug":"lagging-behind-because-of-logs-elk-stack-to-the-rescue","status":"publish","type":"blog","link":"https:\/\/www.cloudthat.com\/resources\/blog\/lagging-behind-because-of-logs-elk-stack-to-the-rescue","title":{"rendered":"Lagging Behind Because of Logs? ELK Stack to the Rescue!"},"content":{"rendered":"<p>One of the most common mistakes professionals make is not using a valuable source of data: &#8216;logs&#8217;. Because of the sheer quantity of logs generated, they rarely get used at all. Typically, logs are consulted only to debug a failure or issue, but they can be used for much more.<\/p>\n<p>For example:<\/p>\n<ul>\n<li>Monitoring processes<\/li>\n<li>Finding the root cause of an issue<\/li>\n<li>Analyzing the flow and performance of processes, and much more<\/li>\n<\/ul>\n<p>Collecting and analyzing logs is made even harder by their diversity. For example, there are access logs, error logs, and application logs, 
which are associated with an application or a server.<\/p>\n<p>In this blog, I will demonstrate how to install and configure the ELK Stack.<br \/>\nELK stands for: <strong>E<\/strong>lasticsearch, <strong>L<\/strong>ogstash and <strong>K<\/strong>ibana.<\/p>\n<p><strong>Before we begin, let\u2019s have a quick overview of the architecture and its components, followed by the implementation procedure.<\/strong><\/p>\n<h3><strong>Architecture of ELK Stack:<\/strong><\/h3>\n<p><a href=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/blog-archi1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5146 size-large\" src=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/blog-archi1-1024x451.png\" alt=\"blog-archi\" width=\"940\" height=\"414\" \/><\/a><\/p>\n<ol>\n<li>Elasticsearch:\n<ul>\n<li>An indexing, storage, and retrieval engine<\/li>\n<li>A powerful open-source full-text search engine<\/li>\n<li>A document is the unit of search and indexing<\/li>\n<li>Fast search against large volumes of data<\/li>\n<li>De-normalized document storage: fast, direct access to the data<\/li>\n<li>Broadly distributed and highly scalable<\/li>\n<\/ul>\n<\/li>\n<li>Logstash:\n<ul>\n<li>Slices and dices log input and writes it to outputs<\/li>\n<li>Centralizes data processing of all types<\/li>\n<li>Normalizes varying schemas<\/li>\n<li>Extensible to custom log formats<\/li>\n<\/ul>\n<\/li>\n<li>Kibana:\n<ul>\n<li>Data visualizer<\/li>\n<li>An open-source data visualization plugin for Elasticsearch<\/li>\n<li>Smooth integration with Elasticsearch<\/li>\n<li>Gives shape to your data<\/li>\n<li>Sophisticated analytics<\/li>\n<li>Flexible interface<\/li>\n<li>Visualizes data from different sources<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><strong>Working:<\/strong><br \/>\nThe ELK Stack architecture is simple and clearly defines the flow of data. Various logs 
from different locations are pulled in by Logstash (if you install Nginx to allow external access, requests go through Nginx first), which processes them.<br \/>\nLogstash is the hub where all the logs are processed and differentiated. The logs are then pushed to Elasticsearch, the retrieval engine, which indexes them according to the index pattern and stores them for further access by Kibana.<br \/>\nKibana is a web UI through which we perform all the activities such as visualizing, analyzing, and creating index patterns.<\/p>\n<p><strong>Prerequisites:<\/strong><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>OS: Ubuntu 14.04<\/li>\n<li>RAM: 4GB<\/li>\n<li>CPU: 2 cores<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3><strong>Getting the ELK Stack Up and Running:<\/strong><\/h3>\n<p><strong>Step 1: Launch an EC2 Instance and Install Everything<\/strong><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>Go to the AWS console and launch a <strong>t2.medium (recommended)<\/strong> instance so that all three services can run on the same instance<\/li>\n<li>Log in to the instance; if you are not using AWS EC2, you can follow the same steps on your local machine<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Install Java 8<\/strong><\/p>\n<pre class=\"lang:default decode:true \">sudo add-apt-repository -y ppa:webupd8team\/java\r\nsudo apt-get update\r\nsudo apt-get -y install oracle-java8-installer\r\n<\/pre>\n<ul>\n<li>Install Elasticsearch, Logstash and Kibana on it<\/li>\n<\/ul>\n<p><strong>Install Elasticsearch<\/strong><\/p>\n<pre class=\"lang:default decode:true\">sudo wget -qO - https:\/\/packages.elastic.co\/GPG-KEY-elasticsearch | sudo apt-key add -\r\necho \"deb https:\/\/packages.elastic.co\/elasticsearch\/2.x\/debian stable main\" | sudo tee -a \/etc\/apt\/sources.list.d\/elasticsearch-2.x.list\r\nsudo apt-get update\r\nsudo apt-get -y install elasticsearch\r\nsudo service elasticsearch restart\r\ncurl 
localhost:9200\r\nsudo update-rc.d elasticsearch defaults 95 10\r\n<\/pre>\n<p><strong>Install Logstash<\/strong><\/p>\n<pre class=\"lang:default decode:true \">echo \"deb https:\/\/packages.elasticsearch.org\/logstash\/1.5\/debian stable main\" | sudo tee -a \/etc\/apt\/sources.list\r\nsudo apt-get update\r\nsudo apt-get -y install logstash\r\nsudo update-rc.d logstash defaults 97 8\r\nsudo service logstash start\r\nsudo service logstash status\r\n<\/pre>\n<p><strong>Install Kibana<\/strong><\/p>\n<pre class=\"lang:default decode:true \">wget https:\/\/download.elastic.co\/kibana\/kibana\/kibana-4.1.1-linux-x64.tar.gz\r\ntar -xzf kibana-4.1.1-linux-x64.tar.gz\r\nsudo mkdir -p \/opt\/kibana\r\nsudo mv kibana-4.1.1-linux-x64\/* \/opt\/kibana\r\ncd \/etc\/init.d &amp;&amp; sudo wget https:\/\/raw.githubusercontent.com\/akabdog\/scripts\/master\/kibana4_init -O kibana4\r\nsudo chmod +x \/etc\/init.d\/kibana4\r\nsudo update-rc.d kibana4 defaults 96 9\r\nsudo service kibana4 start\r\n<\/pre>\n<p><strong>Step 2: Configuration<\/strong><br \/>\n<strong>Configure Logstash<\/strong>:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>We need to direct logs, such as the system logs, to Logstash<\/li>\n<li>Here, we will forward the system logs<\/li>\n<li>Create a configuration file at \/etc\/logstash\/conf.d\/demo-logs.conf<\/li>\n<li>Add the following configuration and save the file<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">input {\r\n  file {\r\n    type =&gt; \"syslog\"\r\n    path =&gt; [ \"\/var\/log\/messages\", \"\/var\/log\/*.log\", \"\/var\/log\/httpd\/access_log\", \"\/var\/log\/httpd\/error_log\" ]\r\n  }\r\n}\r\noutput {\r\n  stdout {\r\n    codec =&gt; rubydebug\r\n  }\r\n  if ([program] == \"logstash\" or [program] == \"elasticsearch\" or [program] == \"nginx\") and [environment] == \"production\" {\r\n    elasticsearch {\r\n      host 
=&gt; \"localhost\"\r\n      index =&gt; \"httpd-%{*}\"\r\n    }\r\n  }\r\nelse {\r\nelasticsearch {\r\nhost =&gt; \"localhost\" # Use the internal IP of your Elasticsearch server for production\r\n}}}\r\nfilter {\r\n  if [type] == \"syslog\" {\r\n    grok {\r\n      match =&gt; { \"message\" =&gt; \"%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{COMBINEDAPACHELOG} %{DATA:syslog_program}(?:\\[%{POSINT:sy$\r\n      add_field =&gt; [ \"received_at\", \"%{@timestamp}\" ]\r\n      add_field =&gt; [ \"received_from\", \"%{host}\" ]\r\n    }\r\n    syslog_pri { }\r\n    date {\r\n      match =&gt; [ \"syslog_timestamp\", \"MMM  d HH:mm:ss\", \"MMM dd HH:mm:ss\" ]\r\n    }}}\r\n<\/pre>\n<ul>\n<li>Now, save the file and restart all the services<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">sudo service elasticsearch restart<\/pre>\n<pre class=\"lang:default decode:true \">sudo service kibana4 restart<\/pre>\n<pre class=\"lang:default decode:true\">sudo service logstash restart\r\n<\/pre>\n<p><strong>NOTE<\/strong>: This will make Kibana accessible to instance_ip only. 
If we want to allow external access, we need to use Nginx as a reverse proxy.<\/p>\n<p><strong>To allow external access, configure Nginx as follows:<\/strong><\/p>\n<ul>\n<li>Install Nginx and the htpasswd utility<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">sudo apt-get -y install nginx apache2-utils<\/pre>\n<ul>\n<li>Create an admin user to access the Kibana dashboard<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">sudo htpasswd -c \/etc\/nginx\/htpasswd.users kibadmin<\/pre>\n<p>This will prompt for a password, which you will need (along with the kibadmin user) to access the Kibana dashboard<\/p>\n<ul>\n<li>Open the Nginx default server block and replace its entire contents with the following code<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">sudo vi \/etc\/nginx\/sites-available\/default<\/pre>\n<pre class=\"lang:default decode:true\">server {\r\n  listen 80;\r\n  server_name example.com;\r\n  auth_basic \"Restricted Access\";\r\n  auth_basic_user_file \/etc\/nginx\/htpasswd.users;\r\n  location \/ {\r\n    proxy_pass http:\/\/localhost:5601;\r\n    proxy_http_version 1.1;\r\n    proxy_set_header Upgrade $http_upgrade;\r\n    proxy_set_header Connection 'upgrade';\r\n    proxy_set_header Host $host;\r\n    proxy_cache_bypass $http_upgrade;\r\n  }\r\n}\r\n<\/pre>\n<p>This configuration makes Nginx direct the server\u2019s HTTP traffic to Kibana, which is listening on localhost:5601. 
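The credentials file was created interactively with htpasswd above. If apache2-utils is not available, here is a sketch of an equivalent, non-interactive approach using openssl (assumed installed; the username, password, and output path are illustrative):

```shell
# Generate an Apache MD5 (apr1) password hash and write an htpasswd-style
# entry of the form "user:hash", which nginx's auth_basic_user_file accepts.
hash=$(openssl passwd -apr1 'S3cretPass')
printf 'kibadmin:%s\n' "$hash" > /tmp/htpasswd.users
cat /tmp/htpasswd.users
```

In production, write the file to /etc/nginx/htpasswd.users and pick a strong password.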
You will then be able to access the Kibana dashboard via the Elasticsearch server\u2019s public IP.<br \/>\nRestart Nginx to apply the changes we made:<\/p>\n<pre class=\"lang:default decode:true \">sudo service nginx restart<\/pre>\n<p><strong>Step 3: Access the Kibana Dashboard<\/strong><\/p>\n<ul>\n<li>If you configured Kibana for local access only, open <strong>instance-ip-address:5601<\/strong> in a web browser; this will open the Kibana dashboard.<\/li>\n<\/ul>\n<ul>\n<li>If you configured external access, browse to the public IP of the Elasticsearch server, e.g. <a href=\"http:\/\/elasticsearch-server-public_ip\/\"><strong>http:\/\/elasticsearch-server-public_ip\/<\/strong><\/a><\/li>\n<\/ul>\n<p><a href=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-start1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5147 size-full\" src=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-start1.png\" alt=\"kibana-start\" width=\"693\" height=\"527\" \/><\/a><\/p>\n<p>This is the dashboard that you will see.<\/p>\n<p><a href=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-new-log1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5148 size-full\" src=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-new-log1.png\" alt=\"kibana-new-log\" width=\"1278\" height=\"631\" \/><\/a><\/p>\n<p>This is how the logs appear. There are also many options to view the logs in different formats and to filter them.<\/p>\n<p><a href=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-piechart1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5149 size-full\" src=\"https:\/\/content.cloudthat.com\/resources\/wp-content\/uploads\/2022\/11\/kibana-piechart1.png\" alt=\"kibana-piechart\" width=\"1279\" height=\"677\" \/><\/a><\/p>\n<p>Here, you can see the logs of 
nginx. There are many such options on the Kibana dashboard that you can explore.<\/p>\n<h3><strong>Conclusion:<\/strong><\/h3>\n<p>Implementing the ELK Stack provides the following benefits:<\/p>\n<ol>\n<li>A simple and quick way to manage logs<\/li>\n<li>Easy analysis of logs<\/li>\n<li>Deep dives into logs (based on timestamp)<\/li>\n<li>Various ways to view logs (bar chart, pie chart, etc.)<\/li>\n<\/ol>\n<p>You just need to create an index pattern that suits your needs, and you are ready to go.<\/p>\n<p>Feel free to ask your questions below and I will get back to you.<\/p>\n<p>Need professional assistance or consulting services for your ELK Stack project? Kindly visit\u00a0<a href=\"https:\/\/cloudthat.com\/consulting\/cloud-strategy\/\" target=\"_blank\" rel=\"noopener\">here<\/a>. Please comment and share if you liked the article.<\/p>\n","protected":false},"author":219,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","template":"","blog_category":[3607],"user_email":"prarthitm@cloudthat.com","published_by":"324","primary-authors":"","secondary-authors":"","acf":[],"_links":{"self":[{"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/blog\/5145"}],"collection":[{"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/blog"}],"about":[{"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/types\/blog"}],"author":[{"embeddable":true,"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/users\/219"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/comments?post=5145"}],"version-history":[{"count":1,"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/blog\/5145\/revisions"}],"predecessor-version":[{"id":41895,"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/blog\/5145\/revisions\/41895"}],"wp:attachment":[{"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/media?parent=5145"}],"wp:ter
m":[{"taxonomy":"blog_category","embeddable":true,"href":"https:\/\/www.cloudthat.com\/resources\/wp-json\/wp\/v2\/blog_category?post=5145"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}