syslog-ng Open Source Edition 3.17 - Administration Guide


Parsing messages with comma-separated and similar values

The syslog-ng OSE application can separate parts of log messages (that is, the contents of the ${MESSAGE} macro) at delimiter characters or strings into named fields (columns). One way to achieve this is to use a csv (comma-separated-values) parser (for other methods and possibilities, see the other sections of parser: Parse and segment structured messages). The parsed fields act as user-defined macros that can be referenced in message templates, file and table names, and so on.

Parsers are similar to filters: they must be defined in the syslog-ng OSE configuration file and used in the log statement. You can also define the parser inline in the log path.

NOTE:

The order of filters, rewriting rules, and parsers in the log statement is important, as they are processed sequentially.

To create a csv-parser(), you have to define the columns of the message, the separator characters or strings (also called delimiters, for example, semicolon or tabulator), and optionally the characters that are used to escape the delimiter characters (quote-pairs()).

Declaration:
parser <parser_name> {
    csv-parser(
        columns(column1, column2, ...)
        delimiters(chars("<delimiter_characters>"), strings("<delimiter_strings>"))
    );
};

Column names work like macros.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, syslog-ng OSE will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).
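
For example, a minimal sketch of a parser that stores its output under a prefix (the parser name, column names, and prefix are illustrative):

parser p_prefixed_csv {
    csv-parser(
        prefix("my-parsed-data.")
        columns("FIRST", "SECOND")
        delimiters(";")
    );
};

The parsed values are then available as ${my-parsed-data.FIRST} and ${my-parsed-data.SECOND}, so they cannot clash with the built-in macros.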

Example: Segmenting hostnames separated with a dash

The following example separates hostnames like example-1 and example-2 into two parts.

parser p_hostname_segmentation {
    csv-parser(
        columns("HOSTNAME.NAME", "HOSTNAME.ID")
        delimiters("-")
        flags(escape-none)
        template("${HOST}")
    );
};
destination d_file {
    file("/var/log/messages-${HOSTNAME.NAME:-examplehost}");
};
log {
    source(s_local);
    parser(p_hostname_segmentation);
    destination(d_file);
};
Example: Parsing Apache log files

The following parser processes the logs of Apache web servers and separates the messages into different fields. Apache log messages can be formatted like this:

"%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %T %v"

Here is a sample message:

192.168.1.1 - - [31/Dec/2007:00:17:10 +0100] "GET /cgi-bin/example.cgi HTTP/1.1" 200 2708 "-" "curl/7.15.5 (i486-pc-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8c zlib/1.2.3 libidn/0.6.5" 2 example.mycompany

To parse such logs, the delimiter character is set to a single whitespace (delimiters(" ")). Delimiters that fall between the quote characters (double quotes and brackets) are ignored (quote-pairs('""[]')).

parser p_apache {
    csv-parser(
        columns("APACHE.CLIENT_IP", "APACHE.IDENT_NAME", "APACHE.USER_NAME",
        "APACHE.TIMESTAMP", "APACHE.REQUEST_URL", "APACHE.REQUEST_STATUS",
        "APACHE.CONTENT_LENGTH", "APACHE.REFERER", "APACHE.USER_AGENT",
        "APACHE.PROCESS_TIME", "APACHE.SERVER_NAME")
        flags(escape-double-char,strip-whitespace)
        delimiters(" ")
        quote-pairs('""[]')
    );
};

The results can be used, for example, to separate log messages into different files based on the APACHE.USER_NAME field. If the field is empty, the name nouser is assigned.

log {
    source(s_local);
    parser(p_apache);
    destination(d_file);
};
destination d_file {
    file("/var/log/messages-${APACHE.USER_NAME:-nouser}");
};
Example: Segmenting a part of a message

Multiple parsers can be used to split a part of an already parsed message into further segments. The following example splits the timestamp of a parsed Apache log message into separate fields.

parser p_apache_timestamp {
    csv-parser(
        columns("APACHE.TIMESTAMP.DAY", "APACHE.TIMESTAMP.MONTH", "APACHE.TIMESTAMP.YEAR", "APACHE.TIMESTAMP.HOUR", "APACHE.TIMESTAMP.MIN", "APACHE.TIMESTAMP.MIN", "APACHE.TIMESTAMP.ZONE")
        delimiters("/: ")
        flags(escape-none)
        template("${APACHE.TIMESTAMP}")
    );
};
log {
    source(s_local);
    parser(p_apache);
    parser(p_apache_timestamp);
    destination(d_file);
};

Options of CSV parsers

The csv-parser has the following options.

columns()
Synopsis: columns("PARSER.COLUMN1", "PARSER.COLUMN2", ...)

Description: Specifies the names of the columns that the parser separates the message into. These names are automatically available as macros. The values of these macros do not include the delimiters.

delimiters()
Synopsis:

delimiters(chars("<delimiter_characters>")) or delimiters("<delimiter_characters>")

delimiters(strings("<delimiter_string1>", "<delimiter_string2>", ...))

delimiters(chars("<delimiter_characters>"), strings("<delimiter_string1>"))

Description: The delimiter is the character or string that separates the columns in the message. If you specify multiple characters using the delimiters(chars("<delimiter_characters>")) option, every character will be treated as a delimiter. To separate the columns at the tabulator (tab character), specify \t. For example, to separate the text at every hyphen (-) and colon (:) character, use delimiters(chars("-:")). Note that the delimiters will not be included in the column values.
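
For instance, a minimal sketch of a parser that splits a message at every hyphen and colon (the parser and column names are illustrative):

parser p_dash_colon {
    csv-parser(
        columns("FIELD1", "FIELD2", "FIELD3")
        delimiters(chars("-:"))
    );
};

For a message body such as 10-20:30, the ${FIELD1}, ${FIELD2}, and ${FIELD3} macros would contain 10, 20, and 30, respectively.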

String delimiters:

If you have to use a string as a delimiter, list your string delimiters in the delimiters(strings("<delimiter_string1>", "<delimiter_string2>", ...)) format.

By default, syslog-ng OSE uses space as a delimiter. If you want to use only the strings as delimiters, you have to disable the space delimiter, for example: delimiters(chars(""), strings("<delimiter_string>"))
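
A minimal sketch, assuming the columns of the message are separated by the string => and the default space delimiter must not apply (the parser and column names are illustrative):

parser p_arrow_delimited {
    csv-parser(
        columns("KEY", "VALUE")
        delimiters(chars(""), strings("=>"))
    );
};

For a message such as alpha=>beta, ${KEY} and ${VALUE} would contain alpha and beta.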

Multiple delimiters:

If you use more than one delimiter, note the following points:

  • syslog-ng OSE will split the message at the nearest possible delimiter. The order of the delimiters in the configuration file does not matter.

  • You can use both string delimiters and character delimiters in a parser.

  • The string delimiters can include characters that are also used as character delimiters.

  • If a string delimiter and a character delimiter both match at the same position of the message, syslog-ng OSE uses the string delimiter.
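
To illustrate the points above, here is a sketch that combines a character delimiter and a string delimiter (the names are illustrative); syslog-ng OSE splits the message at whichever delimiter occurs first:

parser p_mixed_delimiters {
    csv-parser(
        columns("PART1", "PART2", "PART3")
        delimiters(chars(";"), strings("||"))
    );
};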

dialect()
Synopsis: dialect(escape-none|escape-backslash|escape-double-char)

Description: Specifies how to handle escaping in the parsed message. The following values are available. Default value: escape-none

  • escape-backslash: The parsed message uses the backslash (\) character to escape quote characters.

  • escape-double-char: The parsed message repeats the quote character when the quote character is used literally. For example, to escape a comma (,), the message contains two commas (,,).

  • escape-none: The parsed message does not use any escaping for using the quote character literally.

For example:

parser p_demo_parser {
    csv-parser(
        prefix(".csv.")
        delimiters(" ")
        dialect(escape-backslash)
        flags(strip-whitespace, greedy)
        columns("column1", "column2", "column3")
    );
};
flags()
Synopsis: flags(drop-invalid, escape-none, escape-backslash, escape-double-char, greedy, strip-whitespace)

Description: Specifies various options for parsing the message. The following flags are available:

  • drop-invalid: When the drop-invalid option is set, the parser does not process messages that do not match the parser. For example, a message does not match the parser if it has fewer columns than specified in the parser, or it has more columns but the greedy flag is not enabled. Using the drop-invalid option practically turns the parser into a special filter that matches only messages that have the predefined number of columns (using the specified delimiters). For a configuration sketch, see the example after this list.

    TIP:

    Messages dropped as invalid can be processed by a fallback log path. For details on the fallback option, see Log path flags.

  • escape-backslash: The parsed message uses the backslash (\) character to escape quote characters.

  • escape-double-char: The parsed message repeats the quote character when the quote character is used literally. For example, to escape a comma (,), the message contains two commas (,,).

  • escape-none: The parsed message does not use any escaping for using the quote character literally.

  • greedy: The greedy option assigns the remainder of the message to the last column, regardless of the delimiter characters set. You can use this option to process messages where the number of columns varies.

    Example: Adding the end of the message to the last column

    If the greedy option is enabled, the syslog-ng application adds the not-yet-parsed part of the message to the last column, ignoring any delimiter characters that may appear in this part of the message.

    For example, you receive the following comma-separated message: example1, example2, example3, and you segment it with the following parser:

    csv-parser(columns("COLUMN1", "COLUMN2", "COLUMN3") delimiters(","));

    The COLUMN1, COLUMN2, and COLUMN3 variables will contain the strings example1, example2, and example3, respectively. If the message looks like example1, example2, example3, some more information, then any text appearing after the third comma (that is, some more information) is not parsed, and is possibly lost if you use only the variables to reconstruct the message (for example, to send it to different columns of an SQL table).

    Using the greedy flag assigns the remainder of the message to the last column, so that the COLUMN1 and COLUMN2 variables contain the strings example1 and example2, while COLUMN3 contains example3, some more information.

    csv-parser(columns("COLUMN1", "COLUMN2", "COLUMN3") delimiters(",") flags(greedy));
  • strip-whitespace: The strip-whitespace flag removes leading and trailing whitespaces from all columns.
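
Example sketch for the drop-invalid flag (the source and destinations are assumed to be defined elsewhere): messages that do not have exactly three comma-separated columns are dropped by the first log path, so a second log path marked with flags(fallback) can process them instead.

parser p_three_columns {
    csv-parser(
        columns("C1", "C2", "C3")
        delimiters(",")
        flags(drop-invalid)
    );
};

log {
    source(s_local);                # assumed to exist
    parser(p_three_columns);
    destination(d_csv);             # assumed to exist
};

log {
    source(s_local);
    destination(d_unparsed);        # assumed to exist
    flags(fallback);                # catches the messages dropped above
};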

null()
Synopsis: string

Description: If the value of a column is the value of the null() parameter, syslog-ng OSE changes the value of the column to an empty string. For example, if the columns of the message contain the "N/A" string to represent empty values, you can use the null("N/A") option to change these values to empty strings.
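
A minimal sketch, assuming the message uses the N/A placeholder for empty columns (the parser and column names are illustrative):

parser p_na_to_empty {
    csv-parser(
        columns("HOST", "STATUS", "DETAIL")
        delimiters(",")
        null("N/A")
    );
};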

prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:

  • To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.

  • To refer to a particular piece of parsed data that has a prefix, use the prefix in the name of the macro, for example, ${my-parsed-data.name}.

  • If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, syslog-ng OSE will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).
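
A sketch of the SDATA case (the prefix, column names, source, and log server address are illustrative): because the parsed values are stored under the .SDATA. prefix, the syslog() destination includes them in the SDATA part of the forwarded IETF-syslog message.

parser p_sdata_csv {
    csv-parser(
        prefix(".SDATA.my-parsed-data.")
        columns("FIRST", "SECOND")
        delimiters(";")
    );
};

destination d_ietf_syslog {
    syslog("logserver.example.com" transport("tcp"));
};

log {
    source(s_local);                # assumed to exist
    parser(p_sdata_csv);
    destination(d_ietf_syslog);
};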

quote-pairs()
Synopsis: quote-pairs('<quote_pairs>')

Description: List quote-pairs between single quotes. Delimiter characters or strings enclosed between quote characters are ignored. Note that the beginning and ending quote characters do not have to be identical, for example, [} can also be a quote pair. For an example of using quote-pairs() to parse Apache log files, see Example: Parsing Apache log files.
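
A minimal sketch that uses a non-identical quote pair (the parser and column names are illustrative): for a message such as alpha [b c d} omega, the spaces between [ and } are not treated as delimiters, so the quoted text ends up in a single column.

parser p_bracket_quotes {
    csv-parser(
        columns("FIRST", "QUOTED", "LAST")
        delimiters(" ")
        quote-pairs('[}')
    );
};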

template()
Synopsis: template("${<macroname>}")

Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).

For examples, see Example: Segmenting hostnames separated with a dash and Example: Segmenting a part of a message.





Parsing key=value pairs

The syslog-ng OSE application can separate a message consisting of whitespace or comma-separated key=value pairs (for example, Postfix log messages) into name-value pairs. You can also specify a different separator character instead of the equal sign, for example, a colon (:) to parse MySQL log messages. The syslog-ng OSE application automatically trims any leading or trailing whitespace characters from the keys and values, and also parses values that contain unquoted whitespace. For details on using value-pairs in syslog-ng OSE, see Structuring macros, metadata, and other value-pairs.

You can refer to the separated parts of the message using the key of the value as a macro. For example, if the message contains KEY1=value1,KEY2=value2, you can refer to the values as ${KEY1} and ${KEY2}.

NOTE:

If a log message contains the same key multiple times (for example, key1=value1, key2=value2, key1=value3, key3=value4, key1=value5), then syslog-ng OSE stores only the last (rightmost) value for the key. Using the previous example, syslog-ng OSE will store the following pairs: key1=value5, key2=value2, key3=value4.

Caution:

If the name of a key in the message is the same as the name of a syslog-ng OSE soft macro, the value from the parsed message will overwrite the value of the macro. For example, the PROGRAM=value1, MESSAGE=value2 content will overwrite the ${PROGRAM} and ${MESSAGE} macros. To avoid overwriting such macros, use the prefix() option.

Hard macros cannot be modified, so they will not be overwritten. For details on the macro types, see Hard vs. soft macros.

The parser discards message sections that are not key=value pairs, even if they appear between key=value pairs that can be parsed.

To parse key=value pairs, define a parser that has the kv-parser() option. Defining the prefix is optional. By default, the parser will process the ${MESSAGE} part of the log message. You can also define the parser inline in the log path.

Declaration:
parser parser_name {
    kv-parser(
        prefix()
    );
};
Example: Using a key=value parser

In the following example, the source receives a log message consisting of comma-separated key=value pairs, for example, a Postfix log message:

Jun 20 12:05:12 mail.example.com <info> postfix/qmgr[35789]: EC2AC1947DA: from=<me@example.com>, size=807, nrcpt=1 (queue active)

The kv-parser inserts the ".kv." prefix before all extracted name-value pairs. The destination is a file that uses the format-json template function. Every name-value pair whose name begins with a dot (".") character is written to the file (the dot-nv-pairs scope). The log statement connects the source, the parser, and the destination.

source s_kv {
    network(port(21514));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

parser p_kv {
    kv-parser (prefix(".kv."));
};

log {
    source(s_kv);
    parser(p_kv);
    destination(d_json);
};

You can also define the parser inline in the log path.

source s_kv {
    network(port(21514));
};

destination d_json {
    file("/tmp/test.json"
        template("$(format-json --scope dot-nv-pairs)\n"));
};

log {
    source(s_kv);
    parser {
        kv-parser (prefix(".kv."));
    };
    destination(d_json);
};

You can set the separator character between the key and the value to parse, for example, key:value pairs like MySQL logs:

Mar  7 12:39:25 myhost MysqlClient[20824]: SYSTEM_USER:'oscar', MYSQL_USER:'my_oscar', CONNECTION_ID:23, DB_SERVER:'127.0.0.1', DB:'--', QUERY:'USE test;'

parser p_mysql {
    kv-parser(value-separator(":") prefix(".mysql."));
};
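
To use the parsed values, reference them with the prefix, for example, in a file destination (a sketch; the file path and the source are illustrative):

destination d_mysql_audit {
    file("/var/log/mysql-queries.log"
        template("${.mysql.SYSTEM_USER} ${.mysql.DB} ${.mysql.QUERY}\n"));
};

log {
    source(s_local);                # assumed to exist
    parser(p_mysql);
    destination(d_mysql_audit);
};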




Options of key=value parsers

The kv-parser has the following options.

extract-stray-words-into()
Synopsis: extract-stray-words-into("<name-value-pair>")

Description: Specifies the name-value pair where syslog-ng OSE stores any stray words that appear before or between the parsed key-value pairs (mainly when the pair-separator() option is also set). If multiple stray words appear in a message, then syslog-ng OSE stores them as a comma-separated list. Note that the prefix() option does not affect the name-value pair storing the stray words. Default value: N/A

Example: Extracting stray words in key-value pairs

For example, consider the following message:

VSYS=public; Slot=5/1; protocol=17; source-ip=10.116.214.221; source-port=50989; destination-ip=172.16.236.16; destination-port=162;time=2016/02/18 16:00:07; interzone-emtn_s1_vpn-enodeb_om; inbound; policy=370;

This is a list of key-value pairs, where the value separator is = and the pair separator is ;. However, before the last key-value pair (policy=370), there are two stray words: interzone-emtn_s1_vpn-enodeb_om and inbound. If you want to store or process these, specify a name-value pair to store them in with the extract-stray-words-into() option, for example, extract-stray-words-into("my-stray-words"). The value of ${my-stray-words} for this message will be interzone-emtn_s1_vpn-enodeb_om, inbound.
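
A minimal configuration sketch for the message above (the parser name is illustrative; = is the default value separator, so it does not have to be set explicitly):

parser p_stray_words {
    kv-parser(
        pair-separator(";")
        extract-stray-words-into("my-stray-words")
    );
};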

prefix()
Synopsis: prefix()

Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:

  • To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.

  • To refer to a particular piece of parsed data that has a prefix, use the prefix in the name of the macro, for example, ${my-parsed-data.name}.

  • If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.

Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, syslog-ng OSE will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.).

For example, to insert the .postfix. prefix when parsing Postfix log messages, use the prefix(.postfix.) option.

pair-separator()
Synopsis: pair-separator("<separator-string>")

Description: Specifies the character or string that separates the key-value pairs from each other. Default value: , (a comma followed by a whitespace)

For example, to parse key1=value1;key2=value2 pairs, use kv-parser(pair-separator(";"));

template()
Synopsis: template("${<macroname>}")

Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).
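
For example, a sketch that applies the key=value parser only to a field produced by an earlier parser in the same log path (the .csv.PARAMS name and the prefix are illustrative):

parser p_kv_params {
    kv-parser(
        template("${.csv.PARAMS}")
        prefix(".params.")
    );
};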

value-separator()
Synopsis: value-separator("<separator-character>")

Description: Specifies the character that separates the keys from the values. Default value: =

For example, to parse key:value pairs, use kv-parser(value-separator(":"));

