Frits Stegmann
Save your Laravel logs in Elasticsearch

Setup PostgreSQL

Install

apt update && apt -y dist-upgrade && \
apt install -y postgresql
sudo -u postgres psql
CREATE ROLE "app" WITH LOGIN PASSWORD 'app';
CREATE DATABASE app OWNER "app";
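
To confirm the new role can log in, you can connect over TCP with the credentials created above (this assumes the default Ubuntu pg_hba.conf, which allows password authentication on 127.0.0.1):

psql -h 127.0.0.1 -U app -d app -c "SELECT version();"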

Setting up Laravel instance

Install

apt install -y git nginx php7.4-fpm php7.4-cli php7.4-xml php7.4-mbstring php7.4-zip php7.4-pgsql

Install Composer version 2

cd ~ && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php -r "if (hash_file('sha384', 'composer-setup.php') === '756890a4488ce9024fc62c56153228907f1545c228516cbf63f885e036d37e9a59d27d63f46af1d4d07ee0f76181c7d3') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" && \
php composer-setup.php --install-dir=/usr/bin --filename=composer && \
php -r "unlink('composer-setup.php');"

Create application user

adduser --system --shell /bin/bash --group --disabled-password --no-create-home app

Install Laravel into folder

composer create-project laravel/laravel /var/www/app && \
chown app:www-data -R /var/www/app
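
To point the new install at the database created earlier, the standard connection keys in /var/www/app/.env would look roughly like this, assuming the app/app credentials from the PostgreSQL step:

DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=app
DB_USERNAME=app
DB_PASSWORD=app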

Setup BELG Stack

APT Repo

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - && \
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list && \
apt update && \
apt install -y filebeat elasticsearch logstash

Configuration

Update /etc/elasticsearch/elasticsearch.yml, change the contents to the following

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
http.port: 9200

Copy the sample Logstash pipeline into place:

cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash.conf

Setup logstash to accept UDP logs

Edit /etc/logstash/conf.d/logstash.conf and update the contents of the file with the following

input {
  udp {
    codec => "json"
    port => 5055
  }
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "laravel"
  }
}

Restart

systemctl restart elasticsearch && \
systemctl enable elasticsearch && \
systemctl restart logstash && \
systemctl enable logstash && \
systemctl restart filebeat && \
systemctl enable filebeat
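
Before wiring in Laravel, a quick sanity check helps. The first call should return the cluster info JSON; the second pushes a hand-rolled JSON event straight into the Logstash UDP input (nc comes from the netcat package, and the message text is just an example):

curl -s http://127.0.0.1:9200
echo '{"message":"hello from nc"}' | nc -u -w1 127.0.0.1 5055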

Setup Laravel to log to ELK

Update logging settings.

Update /var/www/app/config/logging.php, replace the contents with the following

<?php

use Monolog\Formatter\LogstashFormatter;
use Monolog\Handler\SocketHandler;

return [
    'default' => env('LOG_CHANNEL', 'stack'),
    'channels' => [
        'stack' => [
            'driver' => 'stack',
            'channels' => ['logstash'],
            'ignore_exceptions' => false,
        ],
        'emergency' => [
            'path' => storage_path('logs/laravel.log'),
        ],
        'logstash' => [
            'driver' => 'monolog',
            'handler' => SocketHandler::class,
            'with' => [
                'connectionString' => 'udp://127.0.0.1:5055',
            ],
            'formatter' => LogstashFormatter::class,
            'formatter_with' => [
                'applicationName' => config('app.name'),
            ],
        ],
    ],
];

Create command to test logging

cd /var/www/app && \
php artisan make:command TestCommand

Update /var/www/app/app/Console/Commands/TestCommand.php and replace it with the following

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class TestCommand extends Command
{
    protected $signature = 'command:name';
    protected $description = 'Command description';

    public function handle()
    {
        \Log::debug('test', ['user' => 'Me']);
        return 0;
    }
}

Run

php artisan command:name
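
If everything is wired up, the entry should be searchable in the laravel index a second or two later (the query below is just a basic Lucene match on the message field):

curl -s 'http://localhost:9200/laravel/_search?pretty&q=message:test'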

PostgreSQL and Nginx logs

Update /etc/logstash/conf.d/logstash.conf with the following contents

input {
  udp {
    codec => "json"
    port => 5055
  }
  beats {
    port => 5044
  }
}
output {
  if [event][module] == "postgresql" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "postgresql"
    }
  } else if [event][module] == "nginx" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "nginx"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "laravel"
    }
  }
}

Enable Filebeat modules for Nginx and PostgreSQL

cp /etc/filebeat/modules.d/postgresql.yml.disabled /etc/filebeat/modules.d/postgresql.yml && \
cp /etc/filebeat/modules.d/nginx.yml.disabled /etc/filebeat/modules.d/nginx.yml
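
Copying the files works fine; as an alternative, Filebeat ships a helper subcommand that does the same rename for you:

filebeat modules enable postgresql nginx
filebeat modules list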

Update Filebeat config

Update /etc/filebeat/filebeat.yml and change the contents to the following

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.logstash:
  hosts: ["localhost:5044"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
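
Before restarting, Filebeat can validate its own configuration and its connection to the Logstash output defined above:

filebeat test config
filebeat test output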

Enable PostgreSQL slow logs

Create /etc/postgresql/12/main/conf.d/slowlog.conf with the following contents

log_min_duration_statement = 1 # 1 ms so everything is logged while testing; use something like 1200 in prod

Restart PostgreSQL and Filebeat:

systemctl restart postgresql && \
systemctl status postgresql && \
systemctl restart filebeat && \
systemctl status filebeat
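
With the threshold at 1 ms, almost any statement will land in the slow log. A deliberately slow query makes the whole pipeline easy to verify; the search below assumes the postgresql index from the Logstash config and that the module has had a moment to ship the entry:

sudo -u postgres psql -c "SELECT pg_sleep(2);"
curl -s 'http://localhost:9200/postgresql/_search?pretty&q=message:duration'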

Install Prometheus

Install Server

cd ~ && \
useradd -M -r -s /bin/false prometheus && \
mkdir /etc/prometheus /var/lib/prometheus  && \
curl -s https://api.github.com/repos/prometheus/prometheus/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi -  && \
tar xvf prometheus*.tar.gz  && \
cd `ls -1 -I node | grep -v tar | grep prometheus`  && \
cp ./{prometheus,promtool} /usr/local/bin/  && \
chown -R prometheus:prometheus /var/lib/prometheus  && \
chown prometheus:prometheus /usr/local/bin/{prometheus,promtool}  && \
cp -r ./{consoles,console_libraries} /etc/prometheus/  && \
cp ./prometheus.yml /etc/prometheus/

Systemd configuration

Create /etc/systemd/system/prometheus.service, add the following contents.

[Unit]
Description=Prometheus Time Series Collection and Processing Server
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

Reload systemd and start Prometheus:

systemctl daemon-reload && \
systemctl enable prometheus && \
systemctl start prometheus && \
systemctl status prometheus
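
Prometheus also exposes a health endpoint, which should answer with a short healthy message:

curl -s http://localhost:9090/-/healthy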

Install Node Exporter

cd ~ && \
useradd -M -r -s /bin/false node_exporter  && \
wget https://github.com/prometheus/node_exporter/releases/download/v1.1.2/node_exporter-1.1.2.linux-amd64.tar.gz  && \
tar vxzf node_exporter-*  && \
cd `ls -1 -I node | grep -v tar | grep node_exporter`  && \
cp ./node_exporter /usr/local/bin/  && \
chown node_exporter:node_exporter /usr/local/bin/node_exporter

Systemd configuration

Create /etc/systemd/system/node_exporter.service, add the following contents.

[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Reload systemd and start the exporter:

systemctl daemon-reload && \
systemctl enable --now node_exporter && \
systemctl status node_exporter
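
The exporter listens on port 9100 by default; the first few metric lines confirm it is reading host stats:

curl -s http://localhost:9100/metrics | head -n 20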

Update Prometheus

Update /etc/prometheus/prometheus.yml with the following contents

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

alerting:
  alertmanagers:
  - static_configs:
    - targets:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'server'
    static_configs:
    - targets: ['localhost:9100']

Restart Prometheus:

systemctl restart prometheus && \
systemctl status prometheus
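
The targets API shows whether Prometheus can reach both scrape jobs; each target should report "health":"up":

curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'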

Enable Elasticsearch HTTPS

Setup

apt install -y unzip

# Interactive: choose the settings that suit you and save the resulting file as /root/ssl-http.zip
/usr/share/elasticsearch/bin/elasticsearch-certutil http

cd /root/ && \
unzip ssl-http.zip && \
cp elasticsearch/http.p12 /etc/elasticsearch/ && \
chown elasticsearch /etc/elasticsearch/http.p12

# If you chose a password for the keystore, add it with this:
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password

Update Elasticsearch configuration

Update /etc/elasticsearch/elasticsearch.yml and add the following lines at the end of the file

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: http.p12

Update Logstash configuration

openssl pkcs12 -in /root/ca/ca.p12 -out /etc/logstash/ca.pem -clcerts -nokeys && \
chown logstash /etc/logstash/ca.pem

Update /etc/logstash/conf.d/logstash.conf with the following contents

input {
  udp {
    codec => "json"
    port => 5055
  }
  beats {
    port => 5044
  }
}
output {
  if [event][module] == "postgresql" {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "postgresql"
      ssl => true
      cacert => '/etc/logstash/ca.pem'
    }
  } else if [event][module] == "nginx" {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "nginx"
      ssl => true
      cacert => '/etc/logstash/ca.pem'
    }
  } else {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "laravel"
      ssl => true
      cacert => '/etc/logstash/ca.pem'
    }
  }
}

Restart the stack so everything picks up the new TLS settings:

systemctl stop logstash && \
systemctl restart elasticsearch && \
systemctl status elasticsearch && \
systemctl restart logstash && \
systemctl status logstash && \
systemctl restart filebeat && \
systemctl status filebeat
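
Elasticsearch should now answer over TLS when presented with the CA extracted earlier (this assumes localhost was included as a hostname when the certificate was generated):

curl --cacert /etc/logstash/ca.pem https://localhost:9200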

Install Grafana

wget -q -O - https://packages.grafana.com/gpg.key | apt-key add - && \
add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"  && \
apt update  && \
apt install -y grafana  && \
systemctl start grafana-server  && \
systemctl status grafana-server  && \
systemctl enable grafana-server
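
Grafana serves on port 3000 by default; its health endpoint is a quick way to confirm it came up before logging in:

curl -s http://localhost:3000/api/health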

The default username and password are both admin; choose a new password when you first log in. If you're running this in production, make sure it's a good one.

To add CA cert support for Grafana, go to /root/ca/ and run openssl pkcs12 -in /root/ca/ca.p12 -info -clcerts -nokeys; you can copy and paste the certificate into Grafana's data source setup. The JSON below is the dashboard used here, which you can paste into Grafana's dashboard import screen.

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": 1,
  "links": [],
  "panels": [
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unit": "decbytes"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 8,
        "x": 0,
        "y": 0
      },
      "id": 4,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": {
          "calcs": [
            "lastNotNull"
          ],
          "fields": "",
          "values": false
        },
        "text": {},
        "textMode": "auto"
      },
      "pluginVersion": "8.0.3",
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "field": "@timestamp",
              "id": "2",
              "settings": {
                "interval": "auto"
              },
              "type": "date_histogram"
            }
          ],
          "exemplar": true,
          "expr": "node_memory_MemFree_bytes{instance=\"localhost:9100\"}",
          "interval": "",
          "legendFormat": "",
          "metrics": [
            {
              "id": "1",
              "type": "count"
            }
          ],
          "query": "",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "title": "Memory Free",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "max": 2,
          "min": 0,
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 8,
        "x": 8,
        "y": 0
      },
      "id": 6,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": {
          "calcs": [
            "lastNotNull"
          ],
          "fields": "",
          "values": false
        },
        "text": {},
        "textMode": "auto"
      },
      "pluginVersion": "8.0.3",
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "field": "@timestamp",
              "id": "2",
              "settings": {
                "interval": "auto"
              },
              "type": "date_histogram"
            }
          ],
          "exemplar": true,
          "expr": "node_load5",
          "interval": "",
          "legendFormat": "",
          "metrics": [
            {
              "id": "1",
              "type": "count"
            }
          ],
          "query": "",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "title": "Load",
      "type": "stat"
    },
    {
      "datasource": "Prometheus",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unit": "percent"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 8,
        "x": 16,
        "y": 0
      },
      "id": 10,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": {
          "calcs": [
            "lastNotNull"
          ],
          "fields": "",
          "values": false
        },
        "text": {},
        "textMode": "auto"
      },
      "pluginVersion": "8.0.3",
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "field": "@timestamp",
              "id": "2",
              "settings": {
                "interval": "auto"
              },
              "type": "date_histogram"
            }
          ],
          "exemplar": true,
          "expr": "100 - ((node_filesystem_avail_bytes{mountpoint=\"/\",fstype!=\"rootfs\"} * 100) / node_filesystem_size_bytes{mountpoint=\"/\",fstype!=\"rootfs\"})",
          "interval": "",
          "legendFormat": "",
          "metrics": [
            {
              "id": "1",
              "type": "count"
            }
          ],
          "query": "",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "title": "Diskspace Free",
      "type": "stat"
    },
    {
      "datasource": null,
      "gridPos": {
        "h": 7,
        "w": 24,
        "x": 0,
        "y": 8
      },
      "id": 2,
      "options": {
        "dedupStrategy": "none",
        "enableLogDetails": true,
        "showLabels": false,
        "showTime": false,
        "sortOrder": "Descending",
        "wrapLogMessage": false
      },
      "targets": [
        {
          "alias": "",
          "bucketAggs": [],
          "metrics": [
            {
              "id": "1",
              "settings": {
                "limit": "500"
              },
              "type": "logs"
            }
          ],
          "query": "_index = \"Laravel\"",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "title": "Application Logs",
      "transformations": [
        {
          "id": "filterFieldsByName",
          "options": {
            "include": {
              "names": [
                "@timestamp",
                "message",
                "level",
                "context.user"
              ]
            }
          }
        },
        {
          "id": "organize",
          "options": {
            "excludeByName": {},
            "indexByName": {
              "@timestamp": 0,
              "level": 2,
              "message": 1
            },
            "renameByName": {
              "context.user": "user"
            }
          }
        }
      ],
      "type": "logs"
    },
    {
      "datasource": null,
      "gridPos": {
        "h": 8,
        "w": 24,
        "x": 0,
        "y": 15
      },
      "id": 8,
      "options": {
        "dedupStrategy": "none",
        "enableLogDetails": true,
        "showLabels": false,
        "showTime": false,
        "sortOrder": "Descending",
        "wrapLogMessage": false
      },
      "pluginVersion": "8.0.3",
      "targets": [
        {
          "alias": "",
          "bucketAggs": [],
          "metrics": [
            {
              "id": "1",
              "settings": {
                "limit": "500"
              },
              "type": "logs"
            }
          ],
          "query": "_index = \"postgresql\" && message: \"%duration%\"",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "title": "PostgreSQL logs",
      "transformations": [
        {
          "id": "filterFieldsByName",
          "options": {
            "include": {
              "names": [
                "@timestamp",
                "message"
              ]
            }
          }
        }
      ],
      "type": "logs"
    }
  ],
  "refresh": "5s",
  "schemaVersion": 30,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-30m",
    "to": "now"
  },
  "timepicker": {},
  "timezone": "",
  "title": "Application dashboard",
  "uid": "yYTbXuR7k",
  "version": 5
}

References

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-20-04

https://stackoverflow.com/questions/57346805/laravel-log-system-with-logstash

https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

https://medium.com/@bauernfeind.dominik/using-logstash-with-laravel-509c65065d52

https://www.elastic.co/guide/en/logstash/7.13/configuration-file-structure.html#codec

https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-grafana-on-ubuntu-20-04

https://otodiginet.com/software/how-to-install-prometheus-on-ubuntu-20-04-lts/

https://kifarunix.com/install-and-setup-prometheus-on-ubuntu-20-04/

https://kifarunix.com/monitor-linux-system-metrics-with-prometheus-node-exporter/

https://sysadmins.co.za/how-to-ingest-nginx-access-logs-to-elasticsearch-using-filebeat-and-logstash/

https://ubiq.co/database-blog/how-to-enable-slow-query-log-in-postgresql/

https://www.programmersought.com/article/87915966465/

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-nginx.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html

https://www.elastic.co/guide/en/logstash/current/ls-security.html