Before getting into the procedure, let us briefly talk about the packages we will be using. The best part: all of them are free.
Filebeat: This is a data shipper that watches our log files. Whenever a new line is appended to a log, it ships it to Logstash for processing.
Logstash: This accepts logs from Filebeat, processes/transforms them, and feeds the output to Elasticsearch for indexing.
Elasticsearch: This is a database which will store our logs from Logstash.
Kibana: A visualization tool for our Elasticsearch data. It lets you query the data, build graphs, and do a lot of other fancy stuff.
Elastalert: And finally, the alert/notification system. You can configure it to watch the Elasticsearch data for your patterns of interest and send alert messages via email, Slack, and many other channels.
Here is a good picture I found that describes this:

fig. The overall flow: application logs → Filebeat → Logstash → Elasticsearch → Kibana, with Elastalert watching Elasticsearch
The above packages will be the services that we will be running continuously in our system. So, make sure to download and install them before continuing further. We will look at configuring them soon.
On our Node JS server, we need to set up a logger that logs to a file. You can choose any logger you want, such as Winston or Bunyan. In this article, I will be using Bunyan.
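A minimal sketch of such a Bunyan logger, writing to a file that Filebeat will later watch (the app name and log path here are placeholders, not from the original setup):

// logger.js -- create a Bunyan logger that writes JSON lines to a file
const bunyan = require('bunyan');

const log = bunyan.createLogger({
  name: 'myapp',
  streams: [
    // every record goes to this file; point Filebeat at the same path
    { path: '/path/to/your/logs/myapp.log' }
  ]
});

// level 50 = error in Bunyan's numeric levels
log.error('This is an error!');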
By default, a Bunyan log line will look similar to this:

{"name":"myapp","hostname":"DESKTOP-LS5NT1L","pid":8712,"level":50,"msg":"This is an error!","time":"2019-07-08T14:40:21.220Z","v":0}

I needed to show this because we will be processing this output in Logstash, so keep it as a reference.
Ok, this completes the Node JS part. That's it. No more. Basically, there will be lines like the one above in a file (or files) which we will be watching using Filebeat.
Now, we will be configuring the packages we downloaded earlier.
Note: We are setting up everything in localhost. So we are just adding the minimal configurations needed. We are assuming the default port configurations for all the services, thus, we are not changing any of them. If this is not your case, make sure to tweak the configurations such as hosts, ports and ssl in the files accordingly.
Filebeat: We will configure it to watch the log we generated. Inside your Filebeat package, edit filebeat.yaml. Under the inputs section, make your configuration similar to the following:

  enabled: true
  paths:
    - /path/to/your/logs # this is your logs path to watch
Now, under the output section, comment out the Elasticsearch output and enable the Logstash output:

# output.elasticsearch:
#   hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]
Then fire up Filebeat using:

cd path/to/filebeat/dir
filebeat -c filebeat.yaml -e
That’s it for Filebeat. Now it is watching our logs.
Logstash: For simplicity, in the /path/to/logstash/bin/dir, create a file called logstash.conf and put the following contents:
input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
    target => "message"
  }
  translate {
    field => "[message][level]"
    destination => "[message][level]"
    dictionary => {
      "10" => "trace"
      "20" => "debug"
      "30" => "info"
      "40" => "warn"
      "50" => "error"
      "60" => "fatal"
    }
    override => true
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "filebeat"
  }
  stdout { codec => rubydebug }
}
The configuration is fairly simple. We are taking input from Filebeat and outputting to Elasticsearch. In between, I am using the json and translate filters, which you may not need if your log format is different from mine. Here, I am just parsing the JSON and mapping the numeric message levels to their corresponding string names. You may be using other filters such as grok (a short sketch follows below). The key point of a filter is to transform the data according to your needs. Now fire up Logstash using:

cd path/to/logstash/bin/dir
logstash -f logstash.conf
We are done for Logstash as well.
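A quick note on the grok filter mentioned above: if your app wrote plain-text lines instead of JSON, grok would do the parsing in place of the json filter. A rough sketch, where the pattern is only an illustration for a hypothetical line like "2019-07-08T14:40:21.220Z error Something broke" and is not part of this setup:

filter {
  grok {
    # split the line into a timestamp, a level and the remaining text
    match => { "message" => "%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}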
Elasticsearch, Kibana: These two packages need no further configuration. So just fire them up.
Now we will check if everything is working fine. Go on, throw an error in your Node JS app and let the logger write it to your file. This log should now be picked up by Filebeat, pushed to Logstash and finally to Elasticsearch for indexing. If you go to localhost:9200/filebeat/_search in your browser, you should see that your log has been indexed! Don't be hasty though, it may take a few seconds for your data to show up in Elasticsearch.
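You can run the same check from a terminal; Elasticsearch's _search endpoint returns the indexed documents as JSON (the ?pretty flag just formats the output):

curl "http://localhost:9200/filebeat/_search?pretty"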
Now we can query this data in Kibana for neat processing and visualizations. This is really helpful for filtering your data by, say, log level to see which entries are fatal errors, which are warnings, and so on.
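For example, once you have created an index pattern for the filebeat index, a query like the following in the Discover search bar should narrow the view to error-level entries (message.level is the field our Logstash translate filter produced; the exact field name may differ if your pipeline differs):

message.level : "error"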

fig. Filtering the log by message level in Kibana
This completes the Kibana part as well. You can go further and create more filters, visualizations, etc.
Elastalert: Now we come to the sweet part, the alert system. In this article, we will set up alerts for Slack. However, you can customize it for things like email and so on. Inside your Elastalert package, make a folder called rules. This is where we will store our settings for different alerts. Also, create a copy of the file config.yaml.example in the same directory and rename it to config.yaml. This file holds the base-level configuration for Elastalert. Now open config.yaml and make it look similar to the following:

rules_folder: rules
run_every:
  minutes: 1
es_host: localhost
es_port: 9200
You can leave the other configurations as they are. We are configuring Elastalert to check Elasticsearch every 1 minute and run the rules in the rules folder against whatever has happened since. If any rule matches, it will trigger the configured alert.
Now inside the rules folder, create a file called slack.yaml. Put the following contents in there:

name: Slack rule
type: frequency
index: filebeat
num_events: 1
timeframe:
  minutes: 1
filter:
- term:
    "message.level": "error"
alert:
- "slack"
slack_webhook_url: "<webhook-url-of-the-slack-channel>"
slack_channel_override: "#<channel-name>"
slack_username_override: "@<user-name>"
Here we send a Slack alert whenever a message in our logs has a level of error. Now start the Elastalert service using:

cd /path/to/Elastalert/dir
elastalert --verbose
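Slack is not the only alerter. If you want email instead, a rule can use Elastalert's email alerter; a minimal sketch, assuming an SMTP server is reachable from the Elastalert host and using a placeholder recipient address:

alert:
- "email"
email:
- "you@example.com"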
And that is it, guys. Now we have a complete logging system with an alert service. I hope this helped you. Thanks!