ELK Monitoring – Part 5 – Setup Logstash
In this blog article, we will set up a Logstash server on our local machine. We will also configure Filebeat to ship the log files to Logstash, and Logstash will stash those log entries into the Elasticsearch server.
For a better understanding, it is recommended to go through the blog articles on the ELK topic in order, from 1 to 6.
https://myknowtech.com/tag/elk
What is Logstash?
– Open-source, server-side data processor
– Uses pipelines that can receive input data from multiple sources, transform it, and send it to any type of stash or data engine.
The main work of Logstash is parsing the incoming data, identifying and enriching fields dynamically, and sending the result out to any stash.
The pipeline can use a variety of plugins to perform the stashing operation. There are three stages in the pipeline supported by three plugin categories.
- Input plugins – Enable specific sources of input events to be read by Logstash
- Filter plugins – Enable the intermediate processing of the event
- Output plugins – Send the event to a particular destination
We will start with the configuration.
How to set up Logstash on Windows?
Step 1: Download the Logstash binaries
https://www.elastic.co/downloads/logstash
Step 2: Unzip and install the binaries on the local machine.
Step 3: Setup some important configurations
The main configuration files are – logstash.yml, pipelines.yml, jvm.options and log4j2.properties
The logstash.yml file holds all the necessary default configurations; each setting is documented with a description, so please explore it on your own. In this article, I will concentrate more on setting up the Logstash pipeline.
Logstash works in conjunction with pipelines. We first need to set up a configuration file for the pipeline.
Step 3.1: Configure the Filebeat output to ship to Logstash instead of the Elasticsearch server.
Update filebeat.yml and restart Filebeat.
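A minimal sketch of the change in filebeat.yml, assuming Filebeat and Logstash run on the same local machine and Logstash will listen on the default beats port 5044:
# Comment out the direct Elasticsearch output
#output.elasticsearch:
#  hosts: ["localhost:9200"]
# Enable the Logstash output instead
output.logstash:
  hosts: ["localhost:5044"]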
Step 3.2: Create a new file pega-app.conf and place it in the Logstash home directory
Tip: Since you may have only one Logstash server for one-to-many applications, it is a best practice to have a separate conf file for each system. The pipelines.yml file supports multiple pipelines 🙂
Remember, the pipeline format holds three blocks – input, filter, output
input {
}
filter {
}
output {
}
For now, let's keep the input on the beats port – 5044.
No filters.
For unit testing, let's keep the output on standard output (stdout) instead of the Elasticsearch server.
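A minimal sketch of pega-app.conf for this unit test (the rubydebug codec just pretty-prints each event to stdout, which makes verification easier):
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}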
Step 3.3: Now start Logstash to see whether we are getting the shipped log entries in stdout
Use the command below to run with the newly created conf file
bin/logstash -f pega-app.conf --config.reload.automatic
Yes, we got the log entries in Logstash.
Unit testing is done!
Step 3.4: Update the pega-app.conf file to output to the Elasticsearch server.
The index naming convention is pega-app-*
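A sketch of the updated output block, assuming Elasticsearch runs locally on port 9200; the daily date suffix produces index names that match the pega-app-* pattern:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "pega-app-%{+YYYY.MM.dd}"
  }
}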
Step 3.5: Update the pipelines.yml file to use the pega-app conf file
Note: This allows an option to use multi-pipeline settings. You can add more than one pipeline setting in your logstash server.
Add the pipeline ID and pipeline config file path. It is an array.
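A sketch of the pipelines.yml entry – the config path below is only an assumption, so point it to wherever you placed pega-app.conf. More applications can be added as further entries in the same array:
- pipeline.id: pega-app
  path.config: "C:/ELK/logstash/pega-app.conf"
# - pipeline.id: another-app
#   path.config: "C:/ELK/logstash/another-app.conf"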
Save the configurations.
Step 4: Start Logstash
Open Windows PowerShell and switch to the Logstash home directory
.\bin\logstash.bat
You should see it started successfully.
Make sure Filebeat and the Elasticsearch server are up and running
Let’s do the verification.
Log in to Kibana and start checking the index patterns.
You can see the index with the right naming format was created.
You can use the Discover tab to search the index on your own!
Let's enrich the Elasticsearch documents with a few additional fields, like system name or environment name.
There are two ways to do that.
1. Add additional fields in filebeat.yml, which runs on the same machine as the Pega host.
2. Use a filter plugin in the Logstash pipeline to add more data.
a) Add additional fields in filebeat.yml
Add env and system-name.
Important note: In production, always source these values from system environment variables.
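A sketch of the addition to filebeat.yml; the values are placeholder assumptions, and fields_under_root puts the custom fields at the top level of the document instead of nesting them under a fields object:
fields:
  env: "development"          # in production, prefer an expansion like "${PEGA_ENV}"
  system-name: "pega-app"
fields_under_root: true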
Save and restart filebeat.
Now let’s check in Kibana. Open Kibana and check the recent log file entry.
It came out right on the first attempt 😝 Happy me!
b) Filter plugin in logstash
There are many filter plugins available to enrich the fields in the Elasticsearch document.
One interesting plugin is mutate, where you can add fields.
https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
Add the mutate plugin in the custom pipeline conf file – pega-app.conf
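A sketch of the mutate filter in pega-app.conf; the field names and values here are just illustrative placeholders:
filter {
  mutate {
    add_field => {
      "env" => "development"
      "system-name" => "pega-app"
    }
  }
}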
Now restart Logstash.
Open Kibana and check the logfile entry.
Done.
To summarize this post,
– We successfully set up Logstash on the local machine
– Filebeat was updated to ship the log files to Logstash, and the Logstash output was configured to Elasticsearch
– We added custom fields from the filebeat.yml file
– The pipeline was set up with a filter – the mutate plugin – which can also add custom fields to the Elasticsearch documents.
One last article is pending in this series, on setting up the Kibana dashboard.