This is a quick tutorial on how to set up logging of Magento’s log files using the ELK stack. ELK stands for Elasticsearch, Logstash and Kibana. I won’t go too far into detail, but basically: Elasticsearch is used for storage and quick retrieval of log entries, Logstash is responsible for getting the data into Elasticsearch, and Kibana is used to create overviews and visualizations of the bulk of log entries.
Besides ELK, we’ll use the command line tool log-courier to push the logs straight from our production server to the ELK stack using an encrypted connection.
Note: we’ll be using log-courier 1.8.3 since that is the version currently installed on Byte’s Hypernode hosting solution (which we use for all our clients). The latest version right now is 2.0.5 and offers a few more features, especially in the lc-admin tool.
A lot of my research on how to set this up was taken from this Gist by GitHub user purinda, DigitalOcean’s blog about ELK and log-courier’s documentation.
DigitalOcean used to offer a one-click app for the ELK stack, but unfortunately it is no longer available. I’ll leave it to you to set up your own ELK stack: see the DO blog linked above or pick a Docker image like this one. You can also try mailing DigitalOcean to see whether they have an image lying around for you. Just skip the Filebeat part of the DO blog; we’ll be using log-courier to get our logs into ELK.
Kibana and Elasticsearch are pretty straightforward, just follow the blog. Remember where you put your certificate file and your secret key file for Logstash.
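If you still need to generate that certificate and key, a self-signed pair can be created with openssl. This is a minimal sketch, assuming your production server will connect to the ELK host by the hostname you put in the CN (the paths match the input configuration below):

sudo openssl req -x509 -batch -nodes -newkey rsa:2048 -days 3650 \
  -subj '/CN=your-elk-hostname/' \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt

If you connect by IP address instead of a hostname, you will likely need a subjectAltName entry via an openssl config file rather than just a CN; the DigitalOcean blog shows how to do that.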
At its minimum, a Logstash configuration consists of three parts: input, output and filter.
The input configuration file defines how the logs enter Logstash. In our case we’ll be using log-courier, so we need the logstash-input-courier plugin for Logstash:
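On a typical Logstash 2.x install the plugin can be added with the bundled plugin manager; a sketch, assuming Logstash lives in /opt/logstash (adjust the path to your install):

sudo /opt/logstash/bin/plugin install logstash-input-courier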
Now we’re going to set up the input:
input {
  courier {
    port => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
You can read more about setting up Logstash with log-courier here.
Logstash needs to know where to send the data it receives. Since we’re using ELK, Elasticsearch is the output. In this configuration, we assume it’s running on the same machine (localhost) and on port 9200.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    document_type => "%{[@metadata][type]}"
  }
}
The third element is the filter: the incoming data needs to be understood by Logstash, so we need to tell it what format to expect. This is done using a regex-like syntax called grok. I’ve written two groks (one for Magento 1 and one for Magento 2). There is a great tool to test your groks: the Grok Constructor Matcher.
filter {
  if [type] == "magento2" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:log_level}: %{GREEDYDATA:message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
  }
  if [type] == "magento1" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:date} %{DATA:log_level} \([0-9]+\): %{GREEDYDATA:message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
  }
}
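For reference, these are roughly the log line formats those two groks expect (hypothetical example entries, not taken from a real shop):

Magento 2: [2016-10-03 12:34:56] main.ERROR: Something went wrong
Magento 1: 2016-10-03T12:34:56+00:00 ERR (3): Something went wrong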
Note: this grok is just for system.log and does not take multi-line log entries into account. You can find a multi-line log entry grok for Magento here.
Test your Logstash configuration with service logstash configtest. If you see Configuration OK, run service logstash restart. Then check whether it is listening on port 5043 with sudo lsof -i:5043. It should print something like:
COMMAND  PID  USER      FD  TYPE DEVICE SIZE/OFF NODE NAME
java     2737 logstash  16u IPv6 54468  0t0      TCP  *:5043 (LISTEN)
Now we need to set up our production server to send data to Logstash. Create a log-courier configuration file along these lines (adjust the IP address, certificate path and log paths to your environment):
{ "general": { "admin enabled": true }, "network": { "servers": [ "ELK-IP-ADDRESS:5043" ], "ssl ca": "/absolute/path/to/your/log-courier/logstash.cer" }, "files": [ { "paths": [ "/path/to/magento2/var/log/*.log" ], "fields": { "type": "magento2" } } ] }
Note: the admin tool helps you figure out what’s going on when something doesn’t work. See this GitHub page for what it can do and how it works.
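As a quick sanity check you can attach lc-admin to the running shipper and ask for its status; the exact commands differ a bit between log-courier versions, so treat this as a sketch and check the linked page for your version:

lc-admin
# at the lc-admin prompt, type:
status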
I’ll only briefly touch upon configuring Kibana, because there is a plethora of information out there about how to configure it and it largely depends on your use case.
If you don’t see logs coming in, first make sure there are actually logs in the configured directory on your production server. If there are, SSH into your ELK stack and run sudo lsof -i:5043. If there is a connection from your production server to your ELK stack, you should see something like this:
COMMAND  PID  USER      FD  TYPE DEVICE SIZE/OFF NODE NAME
java     2737 logstash  16u IPv6 54468  0t0      TCP  *:5043 (LISTEN)
java     2737 logstash  41u IPv6 55003  0t0      TCP  ELK-IP-ADDRESS:5043->ELK-IP-ADDRESS:40176 (ESTABLISHED)
java     2737 logstash  45u IPv6 54474  0t0      TCP  ELK-IP-ADDRESS:5043->ELK-IP-ADDRESS:56348 (ESTABLISHED)
You can run the same command for ports 9200 and 5601 to check Elasticsearch and Kibana, respectively.
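Since Elasticsearch and Kibana both speak plain HTTP, a quick curl from the ELK machine works too (using the default ports from this setup):

curl -s http://localhost:9200    # Elasticsearch answers with a small JSON banner
curl -sI http://localhost:5601   # Kibana answers with an HTTP status line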
If there isn’t a connection, check to see whether Logstash is actually running with ps aux | grep -i logstash. If it’s not running, check Logstash’s error and log files in /var/log/logstash/logstash.err and /var/log/logstash/logstash.log.
If there are logs and there is a connection, but still nothing is showing up in Kibana, you can check these things:
Since Hypernode is a managed hosting solution, we have no way to edit the default configuration, so we have to start log-courier manually. This also means it won’t be started automatically after a system reboot, which is why we set up a cron job to make sure log-courier keeps running.
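A minimal sketch of such a cron entry, using the same placeholder config path as before (pgrep -x checks for a running log-courier process and starts one if there is none):

*/5 * * * * pgrep -x log-courier > /dev/null || log-courier -config /path/to/your/log-courier.conf > /dev/null 2>&1 &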
Good luck!