Developing a syslog-ng configuration

This year I started publishing a syslog-ng tutorial series both on my blog and on YouTube: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/ While the series was praised as the best possible introduction to syslog-ng, viewers also mentioned that one interesting element was missing from it: it does not show users how to develop a syslog-ng configuration.

So, in this blog, you can learn how to develop a syslog-ng configuration from the ground up! I will explain not just the end result, but also the process and the steps it takes to develop a configuration. We start with a single source and destination, and conclude with a conditional log path that sends parsed and enriched logs to Elasticsearch (or a compatible document store).

Before you begin

If you want to recreate each step in your own environment, you need some preparations. However, you can also follow everything I write in this blog without installing syslog-ng, simply by looking at the configuration and the example input and output snippets. You can also skip the very last step of sending logs to Elasticsearch (or a compatible document store): it is optional, just the icing on the cake.

So, what do you need if you want to do more than just read?

  • Syslog-ng 3.23 or later with JSON support enabled (JSON support is built into the Fedora/RHEL, openSUSE/SLES and FreeBSD packages, but it is a separate sub-package for Debian/Ubuntu; see the install sketch after this list).

  • Http() destination support, if you want to send logs to Elasticsearch (it is built into the FreeBSD package, but it is a separate sub-package for most Linux distributions).

  • GeoIP parser support (it is a separate sub-package for most Linux distributions, while on FreeBSD you need to compile it from ports).

  • Elasticsearch or OpenSearch and Kibana or OpenSearch Dashboards, if you want to visualize logs at the end.

  • Iptables log messages. I provide some sample logs below, but it is a lot more fun to use live logs from your own firewall.
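If you are on Debian or Ubuntu, installing the sub-packages mentioned above could look like the sketch below. The package names are an assumption on my part and may differ between releases, so verify them first:

# hypothetical package names; verify with: apt search syslog-ng
apt install syslog-ng-mod-json syslog-ng-mod-http syslog-ng-mod-geoip2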

The goal

We want to create a syslog-ng configuration which does the following:

  • Collects local log messages and saves them to a file.

  • Collects iptables log messages on a tcp() source.

  • Parses the iptables logs using the key-value parser.

  • Adds geo-location to source IP addresses in the iptables logs.

  • Saves the local logs and the parsed, enriched iptables logs to Elasticsearch.

Getting started

There are many ways to get started. When you install syslog-ng, it comes with a simple (or sometimes quite complex) basic configuration, collecting local logs and sorting them to dozens of different log files. The configuration bundled with the syslog-ng source code collects everything into a single file. You can also copy & paste a configuration from other blogs or the documentation. You can even start a new syslog-ng.conf from scratch.

No matter which method you choose, creating a configuration can prove quite difficult if you do not understand at least the basics of syslog-ng. The syslog-ng documentation is well over a thousand pages and contains all the little details of syslog-ng, so I would rather recommend my syslog-ng tutorial series: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/ Once you have completed the tutorial series, you should be ready to get started.

Once people know the basics of syslog-ng, they typically start creating their configuration by searching for their use case on the web. These searches usually point people to the syslog-ng blog or to the syslog-ng documentation, and sometimes to other websites as well. As I learned, even One Identity engineers often start solving their problems by checking if their use case is covered by the syslog-ng blog: https://www.syslog-ng.com/community/b/blog If it is, they read the post and test the included configuration example in their own environment. Of course, a simple copy & paste is rarely enough: you might need to combine several different use cases in a single configuration, and you might also need different security options, batching, and so on. While reading the blogs can jumpstart your work, reading the documentation can help you adapt and fine-tune syslog-ng to your exact use case. You can reach the syslog-ng documentation at: https://www.syslog-ng.com/technical-documents/list/syslog-ng-open-source-edition/

Saving local logs to a file

The first step is to create a configuration which collects local log messages and saves them to /var/log/messages without any further processing or filtering:

@version:4.3
@include "scl.conf"
source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

As always, the configuration starts with a version number declaration. You might find it annoying when you start a fresh configuration, but you will be thankful later, when you update syslog-ng and your old configuration still works as expected. Read more about this at: https://www.syslog-ng.com/community/b/blog/posts/backward-compatibility-in-syslog-ng-by-using-the-version-number-in-syslog-ng-conf

The next line includes the syslog-ng configuration library (SCL). It is a best practice to include it in your configuration, as many sources, parsers and destinations described in the documentation are actually defined there, rather than in the C code. This includes the elasticsearch-http() destination, which we will use later in this configuration.

The next two lines define the building blocks used in the configuration: a source called s_sys, which collects local logs and syslog-ng’s own internal log messages, and a destination called d_mesg, which saves logs to a file. The configuration is concluded by a log path that connects these two building blocks, so the collected logs are written to the destination.

You can easily test this configuration by starting syslog-ng with it and sending a few test messages using the logger command.
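For example, assuming that the configuration lives at the distribution’s default location (on most Linux distributions this is /etc/syslog-ng/syslog-ng.conf), a quick test session could look like this sketch:

# check the configuration for syntax errors without starting syslog-ng
syslog-ng --syntax-only
# run syslog-ng in the foreground, printing its internal messages to the terminal
syslog-ng -F -e
# in another terminal: send a test message, then check that it arrived
logger "testing my new syslog-ng configuration"
tail -1 /var/log/messages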

Adding a network source

Once you have tested that the base configuration works, it is time to extend it. The first step is to add a network source, another file destination, and a log path connecting the two. Technically, it is not mandatory to add a second file destination at this stage, but we would need to add it in the next stage anyway.

@version:4.3
@include "scl.conf"

source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

source s_tcp { tcp(port(514)); };

destination d_file {
  file("/var/log/fromnet");
};

log {
  source(s_tcp);
  destination(d_file);
};

Here we added a tcp() source, another file destination, and a log path that connects the two. If you have some real firewall logs, route them to your newly prepared syslog-ng installation. Otherwise, you can use the sample logs below. We are cheating a bit here, as these log messages are missing the syslog header (the date and the host). This does not matter now, but it will make your life easier later, when sending logs to Elasticsearch: we send the logs using loggen, which recreates the message header with the current date. This way, you can check the logs from the last 15 minutes in Kibana, instead of trying to locate the exact date and time in the search interface…

Copy these lines into a text file:

kernel: INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=206.130.246.2 DST=11.11.11.100 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=44993 DF PROTO=TCP SPT=2577 DPT=80 WINDOW=17520 RES=0x00 ACK FIN URGP=0  
kernel: OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=61.195.125.157 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=45814 DF PROTO=TCP SPT=80 DPT=2444 WINDOW=6432 RES=0x00 ACK RST URGP=0  
kernel: INBOUND UDP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=212.123.153.188 DST=11.11.11.82 LEN=404 TOS=0x00 PREC=0x00 TTL=114 ID=19973 PROTO=UDP SPT=4429 DPT=1434 LEN=384  
kernel: INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=206.130.246.2 DST=11.11.11.100 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=9492 DF PROTO=TCP SPT=2577 DPT=80 WINDOW=17520 RES=0x00 ACK FIN URGP=0  
kernel: INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=4.60.2.210 DST=11.11.11.83 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=3024 DF PROTO=TCP SPT=3124 DPT=80 WINDOW=64240 RES=0x00 SYN URGP=0  
kernel: OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=220.210.69.62 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=50684 DF PROTO=TCP SPT=80 DPT=1325 WINDOW=6432 RES=0x00 ACK URGP=0  
kernel: INBOUND UDP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=194.250.174.113 DST=11.11.11.67 LEN=69 TOS=0x00 PREC=0x00 TTL=45 ID=0 DF PROTO=UDP SPT=1812 DPT=1812 LEN=49  
kernel: INBOUND UDP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=209.178.173.93 DST=11.11.11.85 LEN=404 TOS=0x00 PREC=0x00 TTL=111 ID=4912 PROTO=UDP SPT=1035 DPT=1434 LEN=384  
kernel: INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=206.130.246.2 DST=11.11.11.100 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=26589 DF PROTO=TCP SPT=2577 DPT=80 WINDOW=17520 RES=0x00 ACK FIN URGP=0  
kernel: OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=220.210.69.62 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=50688 DF PROTO=TCP SPT=80 DPT=1325 WINDOW=6432 RES=0x00 ACK URGP=0  

And now you can send these logs to port 514 using loggen, the bundled testing and benchmarking tool of syslog-ng:

loggen -i -S -d -R /root/logs/iptables_nohead_short 127.0.0.1 514

Check the loggen manual or the syslog-ng tutorial series to learn what these command line switches mean. You should see these logs with proper syslog headers, including the current date, in /var/log/fromnet:

[root@localhost ~]# tail -3 /var/log/fromnet
Aug 18 16:01:29 localhost kernel: INBOUND UDP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=209.178.173.93 DST=11.11.11.85 LEN=404 TOS=0x00 PREC=0x00 TTL=111 ID=4912 PROTO=UDP SPT=1035 DPT=1434 LEN=384  
Aug 18 16:01:29 localhost kernel: INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=206.130.246.2 DST=11.11.11.100 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=26589 DF PROTO=TCP SPT=2577 DPT=80 WINDOW=17520 RES=0x00 ACK FIN URGP=0  
Aug 18 16:01:29 localhost kernel: OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=220.210.69.62 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=50688 DF PROTO=TCP SPT=80 DPT=1325 WINDOW=6432 RES=0x00 ACK URGP=0  

Adding message parsing

In this step, we want to add geo-location to source IP addresses. However, incoming log messages are treated as a single string. To be able to work with individual fields in a log message, you need to parse the log message. There are many parsers in syslog-ng, but in this case, the key-value parser can extract the relevant information from the iptables logs.

To begin, we add a key-value parser to parse the iptables logs, and also add a JSON template to the file destination. Why a JSON template? Because the regular syslog template does not show the name-value pairs parsed from the log messages. Before we add the GeoIP parser, we want to actually see the name-value pairs parsed from the message. Of course, you could also figure out the names of the name-value pairs yourself, just by looking at the config and the logs, but it is easier to work incrementally.

@version:4.3
@include "scl.conf"

source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

source s_tcp { tcp(port(514)); };

parser p_kv {kv-parser(prefix("kv.")); };

destination d_file {
  file("/var/log/fromnet" template("$(format-json --scope rfc5424
        --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs
        --exclude DATE @timestamp=${ISODATE})\n\n")
  );
};

log {
  source(s_tcp);
  parser(p_kv);
  destination(d_file);
};

We added a kv-parser() to the configuration. The prefix() option adds a prefix to each created name-value pair to make sure that their names are unique (for example, SRC from the message becomes kv.SRC).

Oops, that template looks ugly! Yes, I copied it from an earlier blog. I know only a handful of syslog-ng users who write templates off the top of their heads. This template uses the format-json template function, and includes the RFC5424 syslog name-value pairs, plus any extra name-value pairs, with or without a leading dot. The leading dot is removed, as it has a special meaning in Elasticsearch. The regular date macro is replaced with another name-value pair, which has the name and formatting expected by Elasticsearch. As you may have guessed by now, this template has almost the exact content and formatting needed to send log messages to Elasticsearch. The only difference is the two line feeds at the end: you do not need those for Elasticsearch, but they make the JSON-formatted log files a lot easier to read.
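To illustrate the --rekey and --shift options with a hypothetical example: if a message carried RFC5424 structured data, the corresponding name-value pair name would start with a dot, and --rekey .* --shift 1 would cut that first character:

before: .SDATA.meta.sequenceId => "11"
after:  SDATA.meta.sequenceId => "11"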

If you test your configuration with some iptables logs, you should see something similar in the log file:

[root@localhost ~]# tail -4 /var/log/fromnet
{"kv":{"WINDOW":"17520","URGP":"0","TTL":"51","TOS":"0x00","SRC":"206.130.246.2","SPT":"2577","RES":"0x00 ACK FIN","PROTO":"TCP","PREC":"0x00","PHYSOUT":"eth1","PHYSIN":"eth0","OUT":"br0","LEN":"40","IN":"br0","ID":"26589 DF","DST":"11.11.11.100","DPT":"80"},"SOURCE":"s_tcp","PROGRAM":"kernel","PRIORITY":"notice","MESSAGE":"INBOUND TCP: IN=br0 PHYSIN=eth0 OUT=br0 PHYSOUT=eth1 SRC=206.130.246.2 DST=11.11.11.100 LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=26589 DF PROTO=TCP SPT=2577 DPT=80 WINDOW=17520 RES=0x00 ACK FIN URGP=0  ","LEGACY_MSGHDR":"kernel: ","HOST_FROM":"localhost","HOST":"localhost","FACILITY":"user","@timestamp":"2023-08-22T09:32:28+02:00"}

{"kv":{"WINDOW":"6432","URGP":"0","TTL":"64","TOS":"0x00","SRC":"11.11.11.71","SPT":"80","RES":"0x00 ACK","PROTO":"TCP","PREC":"0x00","PHYSOUT":"eth0","PHYSIN":"eth1","OUT":"br0","LEN":"40","IN":"br0","ID":"50688 DF","DST":"220.210.69.62","DPT":"1325"},"SOURCE":"s_tcp","PROGRAM":"kernel","PRIORITY":"notice","MESSAGE":"OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=220.210.69.62 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=50688 DF PROTO=TCP SPT=80 DPT=1325 WINDOW=6432 RES=0x00 ACK URGP=0  ","LEGACY_MSGHDR":"kernel: ","HOST_FROM":"localhost","HOST":"localhost","FACILITY":"user","@timestamp":"2023-08-22T09:32:28+02:00"}

Adding the GeoIP parser

From the previous log examples, you can see the iptables logs sliced into name-value pairs. In this case, we want to find the geo-location of the IP address stored in a name-value pair called “kv.SRC”.

@version:4.3
@include "scl.conf"

source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

source s_tcp { tcp(port(514)); };

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

destination d_file {
  file("/var/log/fromnet" template("$(format-json --scope rfc5424
        --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs
        --exclude DATE @timestamp=${ISODATE})\n\n")
  );
};

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  destination(d_file);
};

We pass “kv.SRC” to the geoip2() parser, along with a prefix (to make sure that the resulting name-value pairs are unique) and the name of the database file to use. While getting the GeoIP software is easy (it is part of most Linux distributions and of FreeBSD ports), downloading the database requires registration. I do not know the current process for this, as I use a database file I downloaded years ago. It is good enough for testing, but not really suitable for a production environment.
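Before involving syslog-ng, you can sanity check the database file with the mmdblookup tool from libmaxminddb. A sketch, assuming the database path used in the configuration and one of the source IP addresses from the sample logs:

mmdblookup --file /usr/share/GeoIP/GeoLite2-City.mmdb --ip 206.130.246.2 country names en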

In an ideal case, you should see a JSON-formatted log message in /var/log/fromnet, showing the parsed name-value pairs, the name-value pairs describing the geo-location, and the generic syslog fields.

{"kv":{"WINDOW":"6432","URGP":"0","TTL":"64","TOS":"0x00","SRC":"11.11.11.71","SPT":"80","RES":"0x00 ACK","PROTO":"TCP","PREC":"0x00","PHYSOUT":"eth0","PHYSIN":"eth1","OUT":"br0","LEN":"40","IN":"br0","ID":"50688 DF","DST":"220.210.69.62","DPT":"1325"},"geoip2":{"registered_country":{"names":{"en":"United States"},"iso_code":"US","geoname_id":"6252001"},"location":{"longitude":"-97.822000","latitude":"37.751000","accuracy_radius":"1000"},"country":{"names":{"en":"United States"},"iso_code":"US","geoname_id":"6252001"},"continent":{"names":{"en":"North America"},"geoname_id":"6255149","code":"NA"}},"SOURCE":"s_tcp","PROGRAM":"kernel","PRIORITY":"notice","MESSAGE":"OUTG CONN TCP: IN=br0 PHYSIN=eth1 OUT=br0 PHYSOUT=eth0 SRC=11.11.11.71 DST=220.210.69.62 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=50688 DF PROTO=TCP SPT=80 DPT=1325 WINDOW=6432 RES=0x00 ACK URGP=0  ","LEGACY_MSGHDR":"kernel: ","HOST_FROM":"localhost","HOST":"localhost","FACILITY":"user","@timestamp":"2023-08-22T09:55:08+02:00"}

Combined log path

At this point, we have all the message parsing ready. The next step is to combine the local source and the network source, and forward both to the same destination. Obviously, parsing system logs with the key-value and geoip2 parsers does not make much sense (it could lead to some fairly random and useless name-value pairs), so we also need to make parsing conditional. In earlier syslog-ng releases this was pretty difficult, but now a simple “if” statement in the log path solves the problem. The condition in this case can be based on the content of the “SOURCE” macro: we only need to parse log messages if they arrive through the “s_tcp” source.

@version:4.3
@include "scl.conf"

source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

source s_tcp { tcp(port(514)); };

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

destination d_file {
  file("/var/log/fromnet" template("$(format-json --scope rfc5424
        --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs
        --exclude DATE @timestamp=${ISODATE})\n\n")
  );
};

log {
  source(s_tcp);
  source(s_sys);
  if (match("s_tcp" value("SOURCE"))) {
        parser(p_kv);
        parser(p_geoip2);
  };
  destination(d_file);
};

Now system logs are also included, but without all the extra parsing:

[root@localhost syslog-ng]# tail -1 /var/log/fromnet
{"journald":{"_UID":"0","_TRANSPORT":"syslog","_SYSTEMD_UNIT":"sshd.service","_SYSTEMD_SLICE":"system.slice","_SYSTEMD
_INVOCATION_ID":"ec093af18bc74da28c5a21945ac61b28","_SYSTEMD_CGROUP":"/system.slice/sshd.service","_SOURCE_REALTIME_TI
MESTAMP":"1692695378796087","_PID":"4082","_MACHINE_ID":"c64f1032243d40d18310771f8fd94b56","_HOSTNAME":"localhost.loca
ldomain","_GID":"0","_EXE":"/usr/sbin/sshd","_COMM":"sshd","_CMDLINE":"sshd: czanik [priv]","_CAP_EFFECTIVE":"1fffffff
fff","_BOOT_ID":"c2453066d0844251b14afc619abfb6ed","SYSLOG_PID":"4082","SYSLOG_IDENTIFIER":"sshd","SYSLOG_FACILITY":"1
0","PRIORITY":"6","MESSAGE":"pam_unix(sshd:session): session opened for user czanik by (uid=0)"},"SOURCE":"s_sys","PRO
GRAM":"sshd","PRIORITY":"info","PID":"4082","MESSAGE":"pam_unix(sshd:session): session opened for user czanik by (uid=
0)","HOST_FROM":"localhost","HOST":"localhost","FACILITY":"authpriv","@timestamp":"2023-08-22T11:09:38+02:00"}

If you send a few iptables logs to the TCP port, they will still be parsed correctly.
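For example, re-running the earlier loggen command sends the sample iptables logs again, and they should appear in /var/log/fromnet with the kv. and geoip2. name-value pairs, right next to the unparsed system logs:

loggen -i -S -d -R /root/logs/iptables_nohead_short 127.0.0.1 514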

Sending logs to Elasticsearch

Up until now, you could do every step without Elasticsearch, even though the output is (almost) ready to be sent to Elasticsearch (or a compatible document store). I used text files instead of Elasticsearch in this tutorial not just so that you can follow this blog without Elasticsearch, but also because text files make debugging much easier and faster. When sending logs to Elasticsearch or a compatible database, you can easily run into problems with SELinux, the firewall, TLS, authentication, and the configuration itself. While text files are dumber, they are also a lot less prone to errors.

In this step, we add an Elasticsearch destination. This is also the step where I had to find one of my earlier blogs, as it is not as simple as adding a new destination. You also need a rewrite rule on the syslog-ng side to make sure that the geo-location is sent in a format suitable for Elasticsearch. In addition, Elasticsearch needs to know that a given name-value pair contains geo-location information (this is called “mapping”). The Kibana web interface has changed too many times for me to reasonably describe how to add the mapping there, so instead, let me just show the JSON code matching the current syslog-ng configuration:

{
  "mappings" : {
    "properties" : {
      "geoip2" : {
        "properties" : {
          "location2" : {
            "type" : "geo_point"
          }
        }
      }
    }
  }
}
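If you prefer the command line over Kibana, you can create the index with this mapping through the Elasticsearch REST API. A sketch, assuming the index name, user and URL from the destination below, with the mapping saved to mapping.json (replace the password with your own):

curl -k -u elastic:yourpassword -X PUT "https://localhost:9200/syslog-ng" -H 'Content-Type: application/json' -d @mapping.json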

You can learn more about mapping in the Elasticsearch documentation. To continue with this tutorial, here is the complete syslog-ng configuration with the necessary rewrites and the elasticsearch-http() destination:

@version:4.3
@include "scl.conf"
source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };

source s_tcp {
  tcp(ip("0.0.0.0") port("514"));
};

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

rewrite r_geoip2 {
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value( "geoip2.location2" ),
        condition(not "${geoip2.location.latitude}" == "")
    );
};

destination d_elasticsearch_http {
    elasticsearch-http(
        index("syslog-ng")
        type("")
        user("elastic")
        password("Gr3CmhxxxxxxuCWB")
        url("https://localhost:9200/_bulk")
        template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs
        --exclude DATE @timestamp=${ISODATE})")
        tls(peer-verify(no))
    );
};


log {
    source(s_sys);
    source(s_tcp);
    if (match("s_tcp" value("SOURCE"))) {
        parser(p_kv);
        parser(p_geoip2);
        rewrite(r_geoip2);
    };
    destination(d_elasticsearch_http);
    flags(flow-control);
};

Compared to the previous configuration, you can see a couple of changes in this final iteration. Firstly, as mentioned earlier, I added a rewrite rule. It makes sure that syslog-ng combines the longitude and latitude information into a single name-value pair, but only if they are not empty. The file destination is replaced by an elasticsearch-http() destination. After so many years, Elasticsearch now comes with basic security enabled, so this configuration also has authentication enabled. TLS is used as well, but the certificate is not verified (peer-verify(no)), which is acceptable for testing, but not for production. The template is almost the same as with the file destination; only the two extra line feeds at the end are removed.

Finally, the log path has flow-control enabled: if Elasticsearch cannot keep up with the incoming log messages, syslog-ng slows down accepting messages on its sources. Of course, this only works with a TCP source, not with UDP.
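If flow control kicks in too early or too late in your environment, the source-side window size can be tuned. A minimal sketch, assuming the log-iw-size() option on the network source; the right value depends on your message rates:

source s_tcp {
  # a larger initial window lets syslog-ng accept more messages before throttling
  tcp(ip("0.0.0.0") port("514") log-iw-size(10000));
};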

At this point, I must also admit that I skipped an extra step: modifying the configuration to use the file destination and Elasticsearch in parallel, as shown below. By doing so, you can check whether your Elasticsearch destination works properly: if it receives the same logs as the file destination, you are safe to remove (or rather comment out) the file destination from the log path.
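A sketch of that intermediate log path, assuming the d_file destination from the earlier iteration is still defined in the configuration:

log {
    source(s_sys);
    source(s_tcp);
    if (match("s_tcp" value("SOURCE"))) {
        parser(p_kv);
        parser(p_geoip2);
        rewrite(r_geoip2);
    };
    # keep both destinations until Elasticsearch is confirmed to receive the same logs
    destination(d_file);
    destination(d_elasticsearch_http);
    flags(flow-control);
};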

What is next?

If you have watched (or read) my syslog-ng tutorial series and read through this blog on how this configuration was built, you should be able to build your next configuration on your own. However, if you had any problems understanding this configuration, you should (re-)watch the tutorial episodes. As usual, the syslog-ng blog and documentation are there as the primary sources of information. If you get stuck, you can also ask for help on the syslog-ng mailing list, on Gitter, or in GitHub issues / discussions.

-

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
