
ELK Monitoring – Part 5 – Setup Logstash

December 10, 2024 Code Vault Curators

In this blog article, we will set up a Logstash server on our local machine. We will also configure Filebeat to ship the log files to Logstash, and Logstash will stash those log entries into the Elasticsearch server.

For a better understanding, it is recommended to go through the blog articles on the ELK topic in order, from 1 to 6.

https://myknowtech.com/tag/elk

What is Logstash?

– Open-source, server-side data processor.

– Uses a pipeline that can receive input data from multiple sources, transform it, and send it to any type of stash or data engine.

The main work of Logstash is to parse the incoming data, identify fields and enrich the data dynamically, and send it out to any stash.

The pipeline can use a variety of plugins to perform the stashing operation. There are three stages in the pipeline supported by three plugin categories.

  1. Input plugins – Enable specific sources of input events to be read by Logstash
  2. Filter plugins – Enable the intermediate processing of the event
  3. Output plugins – Send the event to a particular destination

We will start with the configuration.

How to set up Logstash on Windows?

Step 1: Download the Logstash binaries

https://www.elastic.co/downloads/logstash

Step 2: Unzip and install the binaries on the local machine.

Step 3: Set up some important configurations

The main configuration files are logstash.yml, pipelines.yml, jvm.options, and log4j2.properties.

The logstash.yml file holds all the necessary default configurations; please go through its descriptions on your own. In this article, I will concentrate more on setting up the Logstash pipeline.

Logstash works in conjunction with a pipeline. We first need to set up a configuration file for the pipeline.

Step 3.1: Configure the Filebeat output to ship to Logstash instead of the Elasticsearch server.

Update filebeat.yml and restart Filebeat.
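For reference, here is a minimal sketch of the relevant portion of filebeat.yml, assuming Filebeat and Logstash run on the same machine and Logstash listens on the default Beats port 5044 (the hosts and ports are assumptions for a local setup):

# Comment out the Elasticsearch output:
# output.elasticsearch:
#   hosts: ["localhost:9200"]

# Enable the Logstash output instead:
output.logstash:
  hosts: ["localhost:5044"]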

Step 3.2: Create a new file, pega-app.conf, and place it in the Logstash home directory.

Tip: Since you may have only one Logstash server serving one or many applications, it is a best practice to have a separate conf file for each system. The pipelines.yml file supports multiple pipelines 🙂

Remember that the pipeline configuration holds three blocks – input, filter, and output.

input {

}

filter {

}

output {

}

For now, let’s keep the input on the Beats port – 5044 – with no filters.

For unit testing, let’s send the output to standard output instead of the Elasticsearch server.
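Putting these together, a minimal sketch of pega-app.conf for this unit test could look like the one below – a Beats input on port 5044, no filter block, and a stdout output using the rubydebug codec for readable console output:

input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }
}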

Step 3.3: Now start Logstash to see whether the shipped log entries appear in stdout.

Use the command below to start Logstash with the newly created conf file:

bin/logstash -f pega-app.conf --config.reload.automatic

Yes, we can see the log entries in the Logstash output.

Unit testing is done now!

Step 3.4: Update the pega-app.conf file to output to the Elasticsearch server.

The index naming convention is pega-app-*.
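As a sketch, assuming Elasticsearch runs locally on port 9200 and we want one index per day matching the pega-app-* pattern, the output block could be changed to:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "pega-app-%{+YYYY.MM.dd}"
  }
}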

Step 3.5: Update the pipelines.yml file to use the pega-app conf file.

Note: This gives the option to use multi-pipeline settings. You can add more than one pipeline definition to your Logstash server.

Add the pipeline ID and the pipeline config file path. The file is an array of pipeline definitions.
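A minimal sketch of pipelines.yml for our case (the config path is a placeholder for wherever you saved pega-app.conf):

- pipeline.id: pega-app
  path.config: "C:/elk/logstash/pega-app.conf"

# More pipelines can be added as additional array entries, for example:
# - pipeline.id: another-app
#   path.config: "C:/elk/logstash/another-app.conf"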

Save the configurations.

Step 4: Start Logstash

Open Windows PowerShell and switch to the Logstash home directory.

.\bin\logstash

You should see it started successfully.

Make sure Filebeat and the Elasticsearch server are up and running.

Let’s do the verification.

Log in to Kibana and start checking the index pattern.

You can see that the index was created in the right format.

You can use the Discover tab to search the index on your own!

Let’s enrich the Elasticsearch documents with a few additional fields, like system name or environment name.

There are two ways to do that.

1. Add additional fields in filebeat.yml, which runs on the same machine as the Pega host.

2. Use a filter plugin in the Logstash pipeline to add more data.

a) Add additional fields in filebeat.yml

Add env and system-name.

Important note: In production, always source these values from system environment variables.
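A sketch of how these fields could be declared under the log input in filebeat.yml – the field names follow this article, while the values and the log path are placeholders (in production, values can be pulled from system environment variables using the ${VAR} syntax):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\PegaLogs\*.log        # placeholder path to the Pega log files
    fields:
      env: development           # e.g. env: ${PEGA_ENV} when sourced from an environment variable
      system-name: pega-app      # placeholder system name
    fields_under_root: true      # put the fields at the top level of the event instead of under "fields"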

Save and restart Filebeat.

Now let’s check in Kibana. Open Kibana and check the recent log file entry.

It worked on the first attempt 😝 Happy me!

b) Filter plugin in Logstash

There are many filter plugins available to enrich the fields in the Elasticsearch document.

One interesting plugin is mutate, where you can add fields.

https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

Add the mutate plugin in the custom pipeline conf file – pega-app.conf
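As a sketch, the filter block in pega-app.conf could look like the following (the field values are placeholders):

filter {
  mutate {
    add_field => {
      "env" => "development"
      "system-name" => "pega-app"
    }
  }
}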

Now restart Logstash.

Open Kibana and check the logfile entry.

Done.

As a summary of this post:

– We successfully set up Logstash on the local machine

– Filebeat was updated to ship the log files to Logstash, and the Logstash output was configured to Elasticsearch

– We added custom fields from the filebeat.yml file

– The pipeline was set up with a filter – the mutate plugin – which can also add custom fields to the Elasticsearch documents

One last article is pending in this series: setting up a Kibana dashboard.

 
