
1. Introduction

Notification icon.

In Checkmk, notification means that users are actively informed in the case of problems or other events in the monitoring. This is most commonly achieved using emails. However, there are also many other methods, such as sending SMS or forwarding to a ticket system. Checkmk provides a simple interface for writing scripts for your own notification methods.

The starting point for any notification to a user is an event reported by the monitoring core. We call this a monitoring event in this article to avoid confusion with the events processed by the Event Console. A monitoring event is always related to a particular host or service. Possible types of monitoring events are:

  • A change of the host or service state (e.g., from OK to CRIT)

  • The start or end of a flapping state

  • The start or end of a scheduled downtime

  • The acknowledgment of a problem

  • The execution of an alert handler (commercial editions)

  • A notification triggered manually by a command

Checkmk utilizes a rule-based system that allows you to create user notifications from these monitoring events — and this can also be used to implement very demanding requirements. A simple notification by email — which is entirely satisfactory in many cases — is nonetheless quick to set up.

This article mainly deals with the basics and general questions about notifications.

If instead you would like to start directly with the implementation: Checkmk generally distinguishes between two ways of defining notifications. On the one hand, the rules for notifications are defined globally. These rules apply to all affected users and groups depending on the event. The creation of these notifications is described under Setting up notifications by rules.

At the same time, each user has the option to influence the notification settings individually. For example, a contact person can deactivate the delivery of notifications to their own inbox while on vacation. You can read how these personal settings can be implemented in the article Personal notification rules.

2. To notify, or not (yet) to notify?

Notifications are basically optional, and Checkmk can still be used efficiently without them. Some large organizations have a sort of control panel in which an operations team has the Checkmk interface constantly under observation, so that additional notifications are unnecessary. If your Checkmk environment is still under construction, bear in mind that notifications will only be of help to your colleagues when no — or only occasional — false alarms (false positives) are produced. You first need to come to grips with the threshold values and all other settings, so that all states are OK / UP — or in other words: everything is ‘green’.

Acceptance of the new monitoring will quickly fade if every day the inbox is flooded with hundreds of useless emails.

The following procedure has been proven to be effective for the fine-tuning of notifications:

Step 1: Fine-tune the monitoring, on the one hand, by fixing any actual problems newly uncovered by Checkmk and, on the other hand, by eliminating false alarms. Do this until everything is ‘normally’ OK / UP. See the Beginner’s Guide for some recommendations for reducing typical false alarms.

Step 2: Next, activate the notifications for yourself only. Reduce the ‘static’ caused by sporadic, short-duration problems. To do this, adjust further threshold values, use predictive monitoring if necessary, increase the number of check attempts or try delayed notifications. And of course, if genuine problems are responsible, attempt to get these under control.

Step 3: Once your own inbox is tolerably peaceful, activate the notifications for your colleagues. Create efficient contact groups so that each contact only receives notifications relevant to them.

These procedures will result in a system which provides relevant information that assists in reducing outages.

3. When notifications are generated and how to deal with them

3.1. Introduction

A large part of the Checkmk notification system’s complexity is due to its numerous tuning options, with which unimportant notifications can be avoided. Most of these options are about delaying or suppressing notifications in certain situations. Additionally, the monitoring core has a built-in intelligence that suppresses certain notifications by default. We will address all of these aspects in this chapter.

3.2. Scheduled downtimes

Icon of a scheduled downtime.

When a host or service is in a scheduled downtime, the object’s notifications will be suppressed. This is – alongside a correct evaluation of availabilities – the most important reason for registering scheduled downtimes in the monitoring in the first place. The following details are relevant to this:

  • If a host is flagged as having a scheduled downtime, then all of its services will automatically be in scheduled downtime as well – without an explicit downtime needing to be entered for them.

  • If an object enters a problem state during a scheduled downtime, the problem will be notified retroactively when the downtime ends as planned.

  • The beginning and the end of a scheduled downtime are themselves monitoring events that are notified.

3.3. Notification periods

Icon of an inactive notification period.

You can define a notification period for each host and service during configuration. This is a time period that restricts when notifications for the object may be sent.

The configuration is performed using the Monitoring Configuration > Notification period for hosts, or respectively, the Notification period for services rule set, which you can quickly find via the search in the Setup menu. An object that is not currently in a notification period will be flagged with a gray pause icon.

Monitoring events for an object that is not currently in its notification period will not be notified. Such notifications will be ‘reissued’ when the notification period is again active – if the host/service is still in a problem state. Only the latest state will be notified even if multiple changes to the object’s state have occurred during the time outside the notification period.

Incidentally, in the notification rules it is also possible to restrict a notification to a specific time period. In this way you can additionally restrict the time ranges. However, notifications that have been discarded due to a rule with time conditions will not automatically be repeated later!

3.4. The state of the host on which a service is running

If a host has completely failed, or is at least inaccessible to the monitoring, then obviously its services can no longer be monitored. Active checks will then as a rule register CRIT or UNKNOWN, since these will be actively attempting to access the host and will thereby run into an error. In such a situation all other checks — thus the great majority — will be omitted and will thus remain in their old state. These will be flagged with the stale icon.

It would naturally be very cumbersome if all active checks in such a state were to notify their problems. For example, if a web server is not reachable – and this has already been notified – it would not be very helpful to additionally generate an email for every single one of its dependent HTTP services.

To minimize such situations, as a basic principle the monitoring core only generates notifications for services if the host is in the UP state. This is also the reason why host accessibility is separately verified. If not otherwise configured, this verification will be achieved with a Smart Ping or ping.

CRE If you are using Checkmk Raw (and thus Nagios as the monitoring core), in isolated cases it can nonetheless occur that a host problem generates a notification for an active service. The reason for this is that Nagios regards the results of host checks as still being valid for a short time into the future. If only a few seconds have elapsed between the last successful ping to the server and the next active check, Nagios can still assess the host as UP even though it is in fact DOWN. In contrast, the Checkmk Micro Core (CMC) will hold the service notification in a ‘standby’ mode until the host state has been verified, thus reliably avoiding such undesired notifications.

3.5. Parent hosts

Imagine that an important network router to a company location with hundreds of hosts fails. All of its hosts will then be unavailable to the monitoring and become DOWN. Hundreds of notifications will therefore be triggered. Not good.

In order to avoid such problems the router can be defined as a parent host for its hosts. If there are redundant hosts, multiple parents can also be defined. As soon as all parents enter a DOWN state, the hosts that are no longer reachable will be flagged with the UNREACH state and their notifications will be suppressed. The problem with the router itself will of course still be notified.

CEE By the way, the CMC operates internally in a slightly different manner to Nagios. In order to reduce false alarms, but still process genuine notifications, the CMC pays very close attention to the exact times of the relevant host checks. If a host check fails the core will wait for the result of the host check on the parent host before generating a notification. This wait is asynchronous and has no effect on the general monitoring. Notifications from hosts can thereby be subject to minimal delays.

4. Controlling notifications

4.1. The principle

Checkmk is configured ‘by default’ so that when a monitoring event occurs a notification email is sent to every contact for the affected host or service. This is certainly initially sensible, but in practice many further requirements arise, for example:

  • The suppression of specific, less useful notifications.

  • The ‘subscription’ to notifications from services for which one is not a contact.

  • A notification can be sent by email, SMS or pager, depending on the time of day.

  • The escalation of problems when no acknowledgment has been received beyond a certain time limit.

  • The option of no notification for the WARN or UNKNOWN states.

  • and much more …​

Checkmk provides you with maximum flexibility in implementing such requirements via its rule-based mechanism.

In the notification configuration, you manage the chain of notification rules, which determine who should be notified and how. When any monitoring event occurs this rule chain will be run through from top to bottom. Each rule has a condition that decides whether the rule actually applies to the situation in question.

If the condition is satisfied the rule determines two things:

  • A selection of contacts (Who should be notified?).

  • A notification method (How to notify?), e.g. HTML email, and optionally, additional parameters for the chosen method.

Important: In contrast to the rules for hosts and services, here the evaluation also continues after the applicable rule has been satisfied. Subsequent rules can add further notifications. Notifications generated by preceding rules can also be deleted.

The end result of the rule evaluation will be a table with a structure something like this:

Who (contact)       How (method)   Parameters for the method
Harry Hirsch        Email          Reply-To: linux.group@example.com
Bruno Weizenkeim    Email          Reply-To: linux.group@example.com
Bruno Weizenkeim    SMS

Now, for each entry in this table the notification script which actually executes the user notification appropriate to the method will be invoked.

4.2. Disabling notifications

Disabling using rules

With the Enable/disable notifications for hosts, or respectively, the Enable/disable notifications for services rule sets you can specify hosts and services for which generally no notifications are to be issued. As mentioned above the core then suppresses notifications. A subsequent notification rule that ‘subscribes’ to notifications for such services will be ineffective, as the notifications are simply not generated.

Disabling using commands

Icon of a disabled notification.

It is also possible to temporarily disable notifications for individual hosts or services via a command.

However, this requires that the permission Commands on host and services > Enable/disable notifications is assigned to the user role. By default, this is not the case for any role.

With the assigned permission, you can disable (and later enable) notifications from hosts and services with the Commands > Notifications command:

Command to enable and disable notifications.

Such hosts or services will then be marked with the corresponding icon.

Since commands — in contrast to rules — require neither configuration permissions nor an activate changes, they can be a quick workaround for reacting promptly to a situation.

Important: In contrast to scheduled downtimes, disabled notifications have no influence on the availability evaluations. If during an unplanned outage you really only want to disable the notifications without wishing to distort the availability statistics, you should not register a scheduled downtime!

Disabling globally

In the Master control snap-in in the sidebar you will find a master switch for Notifications:

Master control snap-in.

This switch is incredibly useful if you plan to make bigger system changes, during which an error could under the circumstances force many services into a CRIT state. You can use the switch to avoid upsetting your colleagues with a flood of useless emails. Remember to re-enable the notifications when you are finished.

Each site in a distributed monitoring has one of these switches. Switching off the central site’s notifications still allows remote sites to trigger notifications — even though these are directed to and delivered from the central site.

Important: Notifications that would have been triggered during the time when notifications were disabled will not be repeated later when they are re-enabled.

4.3. Delaying notifications

You may have services that occasionally enter a problem state for short periods; these brief outages are not critical for you. In such cases notifications are very annoying, but they are easily suppressed. The Delay host notifications and Delay service notifications rule sets serve exactly this situation.

You specify a time in minutes here — and a notification will be delayed until this time has expired. Should the OK / UP state occur again before then, no notification will be triggered. Naturally this also means that the notification of a genuine problem will be delayed.

Obviously even better than delaying notifications would be the elimination of the actual cause of the sporadic problems — but that is of course another story…​

4.4. Repeated check attempts

Another very similar method for delaying notifications is to allow multiple check attempts when a service enters a problem state. This is achieved with the Maximum number of check attempts for hosts, or respectively, the Maximum number of check attempts for service rule set.

If you set a value of 3 here, for example, a check with a CRIT result will at first not trigger a notification. This is referred to as a CRIT soft state. The hard state remains OK. Only if three successive attempts return a not-OK state will the service switch to the hard state and a notification be triggered.

In contrast to delayed notifications, here you have the option of defining views so that such problems are not displayed. A BI aggregation can also be constructed so that only hard states are included — not soft ones.

4.5. Flapping hosts and services

Icon indicating flapping state.

When a host or service changes its state frequently over a short time it is regarded as flapping. Flapping is treated as a state in its own right. The idea here is to reduce excessive notifications during phases in which a service is not (quite) running stably. Such phases can also be evaluated separately in the availability statistics.

Flapping objects are marked with the flapping icon. As long as an object is flapping, successive state changes trigger no further notifications. A notification will however be triggered whenever the object enters or leaves the flapping state.

The system’s recognition of flapping can be influenced in the following ways:

  • The Master control has a main switch for controlling the detection of flapping (Flap Detection).

  • You can exclude objects from detection by using the Enable/disable flapping detection for hosts, or respectively, the Enable/disable flapping detection for services rule set.

  • In the commercial editions, using Global settings > Monitoring core > Tuning of flap detection you can define the parameters for flapping detection and set them to be more or less sensitive:

Global settings for flap detection handling.

Show the context-sensitive help with Help > Show inline help for details on the customizable values.

5. The path of a notification from beginning to end

5.1. The notification history

To get started, we will show you how to view the history of notifications at the host and service level in Checkmk to be able to track the notification process.

A monitoring event that causes Checkmk to trigger a notification is, for example, the change of state of a service. You can manually trigger this state change with the Fake check results command for testing purposes.

For a notification test, you can move a service from the OK state to CRIT in this way. If you now display the notifications for this service on the service details page with Service > Service notifications, you will see the following entries:

List of accumulated notifications for a service.

The most recent entry is at the top of the list and the first entry at the bottom, so let’s look at the individual entries from bottom to top:

  1. The monitoring core logs the monitoring event of the state change. The icon in the first column indicates the state (CRIT in the example).

  2. The monitoring core generates a raw notification. This is passed by the core to the notification module, which performs the evaluation of the applicable notification rules.

  3. The evaluation of the rules results in a notification to the user hh with the method mail.

  4. The notification result shows that the email was successfully handed over to the SMTP server for delivery.

To help you correctly understand how all of the various setting options and conditions interact, and to enable an accurate problem diagnosis when a notification appears or does not appear as expected, we will now describe the details of the notification process and all of the components involved.

Tip

The notification history that we have shown above for a service can also be displayed for a host: on the host details page in the Host menu for the host itself (Notifications of host menu item) and also for the host with all its services (Notifications of host & services).

5.2. The components

The following components are involved in the Checkmk notification system:

Component Function Log file

Nagios

The monitoring core in CRE Checkmk Raw that detects monitoring events and generates raw notifications.

~/var/log/nagios.log
~/var/nagios/debug.log

Checkmk Micro Core (CMC)

The monitoring core in the commercial editions that performs the same function as Nagios in Checkmk Raw.

~/var/log/cmc.log

Notification module

Processes the notification rules in order to create a user notification from a raw notification. It calls up the notification scripts.

~/var/log/notify.log

Notification spooler (commercial editions only)

Asynchronous delivery of notifications, and centralized notifications in distributed environments.

~/var/log/mknotifyd.log

Notification script

For every notification method there is a script which processes the actual delivery (e.g., generates and sends an HTML email).

~/var/log/notify.log

5.3. The monitoring core

Raw notifications

As described above, every notification begins with a monitoring event in the monitoring core. If all conditions have been satisfied and a ‘green light’ for a notification can be given, the core generates a raw notification addressed to the internal auxiliary contact check-mk-notify. The raw notification doesn’t yet contain details of the actual contacts or of the notification method.

The raw notification looks like this in the service’s notification history:

A raw notification in the notification history.
  • The icon is a light-gray loudspeaker.

  • check-mk-notify is given as the contact.

  • check-mk-notify is given as the notification command.

The raw notification then passes to the Checkmk notification module, which processes the notification rules. This module is called up as an external program by Nagios (cmk --notify). The CMC keeps the module on standby as a permanent auxiliary process (notification helper), thus reducing process-creation and saving machine time.

Error diagnosis in the Nagios monitoring core

CRE The Nagios core used in CRE Checkmk Raw logs all monitoring events to ~/var/log/nagios.log. This file is simultaneously the location where it stores the notification history — which is also queried using the GUI if, for example, you wish to see a host’s or service’s notifications.

More interesting, however, are the messages in the ~/var/nagios/debug.log file, which you will receive if you set the debug_level variable to 32 in ~/etc/nagios/nagios.d/logging.cfg.
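The relevant line in that file then looks like this (an excerpt; the other logging settings in the file remain unchanged):

~/etc/nagios/nagios.d/logging.cfg
# 32 activates the debug messages for notifications
debug_level=32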

Following a core restart …​

OMD[mysite]:~$ omd restart nagios

… you will find useful information on the reasons notifications were created or suppressed:

~/var/nagios/debug.log
[1592405483.152931] [032.0] [pid=18122] ** Service Notification Attempt ** Host: 'localhost', Service: 'backup4', Type: 0, Options: 0, Current State: 2, Last Notification: Wed Jun 17 16:24:06 2020
[1592405483.152941] [032.0] [pid=18122] Notification viability test passed.
[1592405485.285985] [032.0] [pid=18122] 1 contacts were notified.  Next possible notification time: Wed Jun 17 16:51:23 2020
[1592405485.286013] [032.0] [pid=18122] 1 contacts were notified.

Error diagnosis in the CMC monitoring core

CEE In the commercial editions you can find a protocol from the monitoring core in the ~/var/log/cmc.log log file. In the standard installation this file contains no information regarding notifications. You can however activate a very detailed logging function with Global settings > Monitoring Core > Logging of the notification mechanics. The core will then provide information on why — or why not (yet) — a monitoring event prompts it to pass a notification to the notification system:

OMD[mysite]:~$ tail -f var/log/cmc.log
2021-08-26 16:12:37 [5] [core 27532] Executing external command: PROCESS_SERVICE_CHECK_RESULT;mysrv;CPU load;1;test
2021-08-26 16:12:43 [5] [core 27532] Executing external command: LOG;SERVICE NOTIFICATION: hh;mysrv;CPU load;WARNING;mail;test
2021-08-26 16:12:52 [5] [core 27532] Executing external command: LOG;SERVICE NOTIFICATION RESULT: hh;mysrv;CPU load;OK;mail;success 250 - b'2.0.0 Ok: queued as 482477F567B';success 250 - b'2.0.0 Ok: queued as 482477F567B'

Note: Turning on logging of the notification mechanics can generate a lot of messages. It is however useful when you later need to know why a notification was not generated in a particular situation.

5.4. Rule evaluation by the notification module

Once the core has generated a raw notification, this runs through the chain of notification rules – resulting in a table of notifications. Alongside the data from the raw notification, every notification contains the following additional information:

  • The contact to be notified

  • The notification method

  • The parameters for this method

In a synchronous delivery, for every entry in the table an appropriate notification script will now be executed. In an asynchronous delivery a notification will be passed as a file to the notification spooler.

Analysis of the rule chain

When you create more complex rule regimes the question of which rules will apply to a specific notification will certainly come up. For this Checkmk provides a built-in analysis function under Setup > Setup > Analyze recent notifications.

In the analysis mode, by default the last ten raw notifications generated by the system and processed through the rules will be displayed:

List of the last 10 raw notifications in analysis mode.

Should you need to analyze a larger number of raw notifications, you can easily increase the number stored for analysis via Global settings > Notifications > Store notifications for rule analysis:

Global setting for the number of raw notifications displayed.

For each of these raw notifications three actions will be available to you:

Icon to test the rule chain.

Tests the rule chain: every rule is checked to determine whether all of its conditions are satisfied for the selected monitoring event. The resulting table of notifications will be displayed together with the rules.

Icon to display the notification context.

Displays the complete notification context.

Raw notification reload icon.

Repeats this raw notification as if it had just occurred. Otherwise the display is the same as in the analysis. With this you can not only check the rule’s conditions, but also test how a notification looks visually.

Error diagnosis

If you have performed the rule chain test, you can see which rules have been applied (green symbol) and which have not been applied (gray symbol) for a monitoring event:

List of applied and not applied rules.

If a rule was not applied, move the mouse over the gray circle to see the hint (mouse-over text):

Hint when a rule has not been applied.

However, this mouse-over text uses abbreviations for the reasons a rule was not applied. These refer to the Host events or Service events conditions of the rule.

Host event types

Abbreviation   Meaning                  Description
rd             UP ➤ DOWN                Host state changed from UP to DOWN
ru             UP ➤ UNREACHABLE         Host state changed from UP to UNREACH
dr             DOWN ➤ UP                Host state changed from DOWN to UP
du             DOWN ➤ UNREACHABLE       Host state changed from DOWN to UNREACH
ud             UNREACHABLE ➤ DOWN       Host state changed from UNREACH to DOWN
ur             UNREACHABLE ➤ UP         Host state changed from UNREACH to UP
?r             any ➤ UP                 Host state changed from any state to UP
?d             any ➤ DOWN               Host state changed from any state to DOWN
?u             any ➤ UNREACHABLE        Host state changed from any state to UNREACH
f                                       Start or end of flapping state
s                                       Start or end of a scheduled downtime
x                                       Acknowledgment of problem
as                                      Alert handler execution, successful
af                                      Alert handler execution, failed

Service event types

Abbreviation   Meaning            Description
rw             OK ➤ WARN          Service state changed from OK to WARN
rr             OK ➤ OK            Service state changed from OK to OK
rc             OK ➤ CRIT          Service state changed from OK to CRIT
ru             OK ➤ UNKNOWN       Service state changed from OK to UNKNOWN
wr             WARN ➤ OK          Service state changed from WARN to OK
wc             WARN ➤ CRIT        Service state changed from WARN to CRIT
wu             WARN ➤ UNKNOWN     Service state changed from WARN to UNKNOWN
cr             CRIT ➤ OK          Service state changed from CRIT to OK
cw             CRIT ➤ WARN        Service state changed from CRIT to WARN
cu             CRIT ➤ UNKNOWN     Service state changed from CRIT to UNKNOWN
ur             UNKNOWN ➤ OK       Service state changed from UNKNOWN to OK
uw             UNKNOWN ➤ WARN     Service state changed from UNKNOWN to WARN
uc             UNKNOWN ➤ CRIT     Service state changed from UNKNOWN to CRIT
?r             any ➤ OK           Service state changed from any state to OK
?w             any ➤ WARN         Service state changed from any state to WARN
?c             any ➤ CRIT         Service state changed from any state to CRIT
?u             any ➤ UNKNOWN      Service state changed from any state to UNKNOWN

Based on these hints you can check and revise your rules.

Another important diagnostic option is the log file ~/var/log/notify.log. During tests with the notifications, the popular command tail -f is useful for this:

OMD[mysite]:~$ tail -f var/log/notify.log
2025-04-09 08:02:49,302 [15] [cmk.base.notify]  -> does not match: Event type 'rd' not handled by this rule. Allowed are: du, ?r
2025-04-09 08:02:49,303 [20] [cmk.base.notify] User cmkadmin's rule 'my test notification'...
2025-04-09 08:02:49,303 [20] [cmk.base.notify]  -> matches!
2025-04-09 08:02:49,303 [20] [cmk.base.notify]    - adding notification of cmkadmin via mail
2025-04-09 08:02:49,303 [20] [cmk.base.notify] User peter's rule 'test notification of peter'...
2025-04-09 08:02:49,303 [20] [cmk.base.notify]  -> matches!
2025-04-09 08:02:49,303 [20] [cmk.base.notify]    - modifying notification of peter via mail
2025-04-09 08:02:49,303 [20] [cmk.base.notify] Executing 2 notifications:
2025-04-09 08:02:49,303 [20] [cmk.base.notify]   * would notify peter via mail, parameters: graphs_per_notification, notifications_with_graphs, matching_rule_nr, matching_rule_text, bulk: no
2025-04-09 08:02:49,303 [20] [cmk.base.notify]   * would notify cmkadmin via mail, parameters: graphs_per_notification, notifications_with_graphs, matching_rule_nr, matching_rule_text, bulk: no

With Global settings > Notifications > Notification log level you can control the verbosity of the notification log in three levels. Set this to Full dump of all variables and command, and in the log file you will find a complete listing of all of the variables available to the notification script:

Global setting to specify the log level.

For example, the list will appear like this (extract):

~/var/log/notify.log
2025-04-09 08:47:39,186 [10] [cmk.base.notify] Raw context:
                    CONTACTS=hh
                    HOSTACKAUTHOR=
                    HOSTACKCOMMENT=
                    HOSTADDRESS=127.0.0.1
                    HOSTALIAS=localhost
                    HOSTATTEMPT=1
                    HOSTCHECKCOMMAND=check-mk-host-smart

5.5. Asynchronous delivery via the notification spooler

CEE A powerful supplementary function of the commercial editions is the notification spooler. This enables an asynchronous delivery of notifications. What does asynchronous mean in this context?

  • Synchronous delivery: The notification module waits until the notification script has finished executing. If this takes a long time to execute, more notifications will pile up. If monitoring is stopped, these notifications are lost. In addition, if many notifications are generated over a short period of time, a backlog may build up to the core, causing the monitoring to stall.

  • Asynchronous delivery: Every notification will be saved to a spool file under ~/var/check_mk/notify/spool. No jam can build up. If the monitoring is stopped the spool files will be retained, and the notifications can be delivered correctly later. The notification spooler takes over the processing of the spool files.

Synchronous delivery is feasible if the notification script runs quickly and, above all, cannot run into a timeout. This is the case for notification methods that hand the message over to an existing spool service: for email and SMS in particular, the system’s own spool services can be used. The notification script merely passes a file to that spooler, so no waiting can occur.

When using the traceable delivery via SMTP or other scripts which establish network connections, you should always employ asynchronous delivery. This also applies to scripts that send text messages (SMS) via HTTP over the internet. The timeouts when building a connection to a network service can take up to several minutes, causing a jam as described above.

The good news is that asynchronous delivery is enabled by default in Checkmk. For one thing, the notification spooler (mknotifyd) is also started when the site is started, which you can check with the following command:

OMD[mysite]:~$ omd status mknotifyd
mknotifyd:      running
-----------------------
Overall state:  running

On the other hand, asynchronous delivery (Asynchronous local delivery by notification spooler) is selected in Global settings > Notifications > Notification Spooling:

Global setting for the notification spooler delivery method.

Error diagnosis

The notification spooler maintains its own log file: ~/var/log/mknotifyd.log. This has three log levels, which can be set under Global settings > Notifications > Notification Spooler Configuration with the Verbosity of logging parameter. The default is Normal logging (only startup, shutdown and errors). In the middle level, Verbose logging (i.e. spooled notifications), the processing of the spool files can be seen:

~/var/log/mknotifyd.log
2025-04-09 08:47:37,928 [15] [cmk.mknotifyd] processing spoolfile: /omd/sites/mysite/var/check_mk/notify/spool/dad64e2e-b3ac-4493-9490-8be969a96d8d
2025-04-09 08:47:37,928 [20] [cmk.mknotifyd] running cmk --notify --log-to-stdout spoolfile /omd/sites/mysite/var/check_mk/notify/spool/dad64e2e-b3ac-4493-9490-8be969a96d8d
2025-04-09 08:47:39,848 [20] [cmk.mknotifyd] got exit code 0
2025-04-09 08:47:39,850 [20] [cmk.mknotifyd] processing spoolfile dad64e2e-b3ac-4493-9490-8be969a96d8d successful: success 250 - b'2.4.0 Ok: queued as 1D4FF7F58F9'
2025-04-09 08:47:39,850 [20] [cmk.mknotifyd] sending command LOG;SERVICE NOTIFICATION RESULT: hh;mysrv;CPU load;OK;mail;success 250 - b'2.4.0 Ok: queued as 1D4FF7F58F9';success 250 - b'2.0.0 Ok: queued as 1D4FF7F58F9'

6. Traceable delivery per SMTP

6.1. Email is not reliable

CEE Monitoring is only useful when one can rely on it. This requires that notifications are received reliably and promptly. Unfortunately, email delivery is not ideal in this respect. Dispatch is usually handled by passing the email to the local SMTP server, which then attempts to deliver it autonomously and asynchronously.

With a temporary error (e.g., if the receiving SMTP server is not reachable) the email will be put into a queue and a new attempt will be made later. This ‘later’ will as a rule be after 15-30 minutes. By then the notification could be far too late!

If the email really can’t be delivered the SMTP server creates a nice error message in its log file and attempts to generate an error email to the ‘sender’. But the monitoring system is not a real sender and also cannot receive emails. It follows that such errors simply disappear and notifications are then absent.

6.2. Using SMTP on a direct connection enables error analysis

The commercial editions provide the possibility of a traceable delivery via SMTP. This deliberately works without the help of the local mail server: instead, Checkmk itself sends the email to your smarthost via SMTP and then evaluates the SMTP response itself.

In this way, not only are SMTP errors treated intelligently, but a correct delivery is also precisely documented. It is a bit like a registered letter: Checkmk receives a receipt from the SMTP smarthost (receiving server) verifying that the email has been accepted — including a mail ID.

The practical process for setting up notifications with traceable delivery via SMTP is described in global notification rules and in personal notification rules.

6.3. SMS and other notification methods

A synchronous delivery including error messages and traceability has to date only been implemented for HTML emails. How one can return an error status in a self-written notification script can be found in the chapter on writing your own scripts.

7. Notifications in distributed systems

In distributed environments — i.e., those with more than a single Checkmk site — the question arises of what to do with notifications generated on remote sites. In such a situation there are basically two possibilities:

  1. Local delivery

  2. Central delivery on the central site (commercial editions only)

Detailed information on this subject can be found in the article on distributed monitoring.

8. Notification scripts

8.1. The principle

Notifications can be delivered in many different, individual ways. Typical examples are:

  • Transfer of notifications to a ticket system or an external notification system

  • The sending of an SMS over various internet services

  • Automated telephone calls

  • Forwarding to a higher, umbrella monitoring system

For this reason Checkmk provides a very simple interface which enables you to write your own notification scripts. These can be written in any Linux-supported programming language — even though Shell, Perl and Python together have 95 % of the ‘market’.

The standard scripts included with Checkmk can be found in ~/share/check_mk/notifications. This directory is a component of the software and is not intended to be changed. Instead, save your own scripts in ~/local/share/check_mk/notifications. Ensure that your scripts are executable (chmod +x). They will then be found automatically and made available for selection in the notification rules.

Should you wish to customize a standard script, simply copy it from ~/share/check_mk/notifications to ~/local/share/check_mk/notifications and there make your changes in the copy. If you retain the original name, your script will be substituted automatically for the standard version and no changes will need to be made to the existing notification rules.
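For example, if you want to adapt the standard HTML email script (assuming its file name is mail, as in a standard installation), the copy can be created like this:

OMD[mysite]:~$ cp ~/share/check_mk/notifications/mail ~/local/share/check_mk/notifications/mail

You then edit the copy under ~/local/share/check_mk/notifications/mail, and it will automatically be used in place of the original.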

Some more sample scripts are included with the software in ~/share/doc/check_mk/treasures/notifications. You can use these as templates for customization. The configuration will generally take place directly in the script — tips covering this can be found there in the comments.

In the case of a notification your script will be called up with the site user’s permissions. In environment variables beginning with NOTIFY_, the script receives all of the information about the affected host/service, the monitoring event, the contacts to be notified, and the parameters specified in the notification rule.

Texts that the script writes to the standard output (with print, echo, etc.), appear in the notification module’s log file ~/var/log/notify.log.

8.2. Traceable notifications

Notification scripts have the option of using an exit code to communicate whether a temporary or a final error has occurred:

Exit code Description

0

The script was successfully executed.

1

A temporary error has occurred. The execution will be reattempted after a short wait, until the configured maximum number of attempts has been reached. Example: an HTTP connection cannot be established with an SMS service.

2 and higher

A final error has occurred. The notification will not be reattempted. A notification error will be displayed in the GUI and in the host’s/service’s history. Example: the SMS service reports an ‘invalid authentication’ error.

Additionally, in all cases the standard output from the notification script, together with the status will be entered into the host’s/service’s notification history and will therefore be visible in the GUI.
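As an illustration of this convention, here is a minimal sketch of a notification script for an SMS gateway. The gateway URL, its parameters and its responses are purely hypothetical; only the handling of the exit codes is the point here:

~/local/share/check_mk/notifications/mysms
#!/bin/bash
# My SMS notification (sketch)

# NOTE: https://sms.example.com/send and its responses are hypothetical;
# replace them with the API of your actual SMS provider.
RESPONSE=$(curl --silent --show-error --max-time 10 \
    --data-urlencode "to=$NOTIFY_CONTACTPAGER" \
    --data-urlencode "text=$NOTIFY_HOSTNAME $NOTIFY_SERVICEDESC $NOTIFY_SERVICESTATE" \
    "https://sms.example.com/send" 2>&1)

if [ $? -ne 0 ]; then
    # connection refused, DNS failure or timeout: temporary error, retry later
    echo "Could not reach SMS gateway: $RESPONSE"
    exit 1
fi

if echo "$RESPONSE" | grep -qi "invalid authentication"; then
    # misconfigured credentials: retrying will not help
    echo "SMS gateway rejected the request: $RESPONSE"
    exit 2
fi

echo "SMS submitted: $RESPONSE"
exit 0

The lines written to the standard output end up, together with the status, in the notification history as described above.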

Important: Traceable notifications are not available for bulk notifications!

CEE The treatment of notification errors from the user’s point of view will be explained in the chapter on traceable delivery via SMTP.

8.3. A simple script example

As an example you can create a script that writes all of the information about the notification to a file. The coding language is the Bash Linux shell:

~/local/share/check_mk/notifications/foobar
#!/bin/bash
# Foobar Teleprompter

env | grep NOTIFY_ | sort > $OMD_ROOT/tmp/foobar.out
echo "Successfully written $OMD_ROOT/tmp/foobar.out"
exit 0

Then make the script executable:

OMD[mysite]:~$ chmod +x local/share/check_mk/notifications/foobar

Here are a couple of explanations concerning the script:

  • In the first line is a #! and the path to the script language’s interpreter (here /bin/bash).

  • In the second line after the comment character # is a title for the script. This title will later be shown in the notification rule when selecting the notification method.

  • The env command will output all environment variables received by the script.

  • With grep NOTIFY_ the Checkmk variables will be filtered out …​

  • … and sorted alphabetically with sort.

  • > $OMD_ROOT/tmp/foobar.out writes the result to the ~/tmp/foobar.out file within the site directory.

  • The exit 0 would actually be superfluous in this location since the shell always takes the exit code from the last command. Here this is echo and is always successful — but explicit is always better.

8.4. Testing the example script

So that the script will be used you must define it as a method in a notification rule. Self-written scripts have no parameter declaration, therefore all of the checkboxes such as those offered, for example, in the HTML Email method, will be missing. Instead you can enter a list of texts as parameters that can be available as NOTIFY_PARAMETER_1, NOTIFY_PARAMETER_2, etc, to the script. For a test provide the parameters Fröhn, Klabuster and Feinbein:

Rule with selection of sample script as notification method.

Now to test, set the service CPU load on the host myserver to CRIT — with the Fake check results command. In the log file of the notification module, ~/var/log/notify.log, you will then see the execution of the script, including its parameters, and the generated spool file:

~/var/log/notify.log
2021-08-25 13:01:23,887 [20] [cmk.base.notify] Executing 1 notifications:
2021-08-25 13:01:23,887 [20] [cmk.base.notify]   * notifying hh via foobar, parameters: Fröhn, Klabuster, Feinbein, bulk: no
2021-08-25 13:01:23,887 [20] [cmk.base.notify] Creating spoolfile: /omd/sites/mysite/var/check_mk/notify/spool/e1b5398c-6920-445a-888e-f17e7633de60

The file ~/tmp/foobar.out will now contain an alphabetic list of all Checkmk environment variables that include information concerning the notification. Here you can orient yourself with which values are available to your script. Here are the first ten lines:

OMD[mysite]:~$ head tmp/foobar.out
NOTIFY_ALERTHANDLERNAME=debug
NOTIFY_ALERTHANDLEROUTPUT=Arguments:
NOTIFY_ALERTHANDLERSHORTSTATE=OK
NOTIFY_ALERTHANDLERSTATE=OK
NOTIFY_CONTACTALIAS=Harry Hirsch
NOTIFY_CONTACTEMAIL=harryhirsch@example.com
NOTIFY_CONTACTNAME=hh
NOTIFY_CONTACTPAGER=
NOTIFY_CONTACTS=hh
NOTIFY_DATE=2021-08-25

The parameters can also be found:

OMD[mysite]:~$ grep PARAMETER tmp/foobar.out
NOTIFY_PARAMETERS=Fröhn Klabuster Feinbein
NOTIFY_PARAMETER_1=Fröhn
NOTIFY_PARAMETER_2=Klabuster
NOTIFY_PARAMETER_3=Feinbein

8.5. Environment variables

In the above example you have seen a number of environment variables that will be passed to the script. Precisely which variables are available depends on the type of notification, the Checkmk version and edition and the monitoring core used (CMC or Nagios). Alongside the trick with the env command there are two further ways of getting a complete list of all variables:

  • Turning up the log level for ~/var/log/notify.log via Global settings > Notifications > Notification log level.

  • For notifications per HTML Email there is a checkbox Information to be displayed in the email body with the Complete variable list (for testing) option.

Below is a list of the most important variables:

Environment variable Description

OMD_ROOT

Home directory of the site, e.g., /omd/sites/mysite.

OMD_SITE

Site name, e.g., mysite.

NOTIFY_WHAT

For host notifications, the word HOST, otherwise SERVICE. With this variable you can make your script handle both cases sensibly and log useful information for each (see the sketch after this table).

NOTIFY_CONTACTNAME

User name (login) of the contact.

NOTIFY_CONTACTEMAIL

The email address of the contact.

NOTIFY_CONTACTPAGER

Entry in the Pager field in the contact’s user profile. Since the field is not generally reserved for a specific purpose, you can simply use it for each user in order to save information required for notifications.

NOTIFY_DATE

Date of the notification in ISO-8601-Format, e.g., 2021-08-25.

NOTIFY_LONGDATETIME

Date and time in the non-localized Linux system’s default display, e.g., Wed Aug 25 15:18:58 CEST 2021.

NOTIFY_SHORTDATETIME

Date and time in ISO-Format, e.g. 2021-08-25 15:18:58

NOTIFY_HOSTNAME

Name of the affected host.

NOTIFY_HOSTOUTPUT

Output of the host check’s check plug-in, e.g., Packet received via smart PING. This output is only relevant for host notifications, but is also present in service notifications.

NOTIFY_HOSTSTATE

One of the words: UP, DOWN or UNREACH

NOTIFY_NOTIFICATIONTYPE

Notification type as described in the introduction to this article. This will be expressed by one of the following words:
PROBLEM: Normal host or service problem
RECOVERY: Host/Service is again UP / OK
ACKNOWLEDGEMENT (…​): Acknowledgment of a problem
FLAPPINGSTART: Host/service has begun flapping
FLAPPINGSTOP: Flapping has ended
DOWNTIMESTART: Start of a scheduled downtime
DOWNTIMEEND: Normal end of a downtime
DOWNTIMECANCELLED: Premature interruption of a downtime
CUSTOM: Notification issued by a manual command
ALERTHANDLER (…​): Alert handler execution (only commercial editions)
For types with (…​), the brackets contain additional information on the notification’s type.

NOTIFY_PARAMETERS

All of the script’s parameters separated by blanks.

NOTIFY_PARAMETER_1

The script’s first parameter.

NOTIFY_PARAMETER_2

The script’s second parameter, etc.

NOTIFY_SERVICEDESC

Name of the service concerned. This variable is not present in host notifications.

NOTIFY_SERVICEOUTPUT

Output of the service check’s check plug-in (not for host notifications)

NOTIFY_SERVICESTATE

One of the words: OK, WARN, CRIT or UNKNOWN
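For example, a script can branch on NOTIFY_WHAT to produce a sensible line of output for host and service notifications alike. A minimal sketch using only the variables listed above (the log file name is arbitrary):

~/local/share/check_mk/notifications/logline
#!/bin/bash
# Log one line per notification (sketch)

# build a different summary line for host and for service notifications
if [ "$NOTIFY_WHAT" = "HOST" ]; then
    LINE="$NOTIFY_SHORTDATETIME host $NOTIFY_HOSTNAME is $NOTIFY_HOSTSTATE: $NOTIFY_HOSTOUTPUT"
else
    LINE="$NOTIFY_SHORTDATETIME service $NOTIFY_SERVICEDESC on $NOTIFY_HOSTNAME is $NOTIFY_SERVICESTATE: $NOTIFY_SERVICEOUTPUT"
fi

echo "$LINE" >> "$OMD_ROOT/tmp/notifications.log"
exit 0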

8.6. Bulk notifications

If your script is to support bulk notifications, it needs to be specially prepared, since it must deliver multiple notifications in a single invocation. For this reason passing the data via environment variables is not practicable here.

Mark your script in the third line of the header as shown below — the notification module will then send the notifications through the standard input:

~/local/share/check_mk/notifications/mybulk
#!/bin/bash
# My Bulk Notification
# Bulk: yes

Through the standard input the script will receive blocks of variables. Each line has the form: NAME=VALUE. Blocks are separated by blank lines. The ASCII character with the code 1 (\a) is used to represent newlines within the text.

The first block contains a list of general variables (e.g., call parameters). Each subsequent block assembles the variables into a notification.

The best recommendation is to try it yourself with a simple test that writes the complete data to a file so that you can see how the data is sent. You can use the following notification script for this purpose:

~/local/share/check_mk/notifications/mybulk
#!/bin/bash
# My Bulk Notification
# Bulk: yes

cat > $OMD_ROOT/tmp/mybulktest.out

Test the script as described above and additionally activate the Notification Bulking in the notification rule.
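If you want to go one step further than simply dumping the data, the following sketch splits the standard input into its blocks and merely logs how many notifications a bulk contained (the output file name is again arbitrary):

~/local/share/check_mk/notifications/mybulkcount
#!/bin/bash
# My bulk counter
# Bulk: yes

blocks=0
block=""
while IFS= read -r line; do
    if [ -z "$line" ]; then
        # a blank line ends the current block
        [ -n "$block" ] && blocks=$((blocks + 1))
        block=""
    else
        block="$block$line"$'\n'
    fi
done
# count a final block that is not terminated by a blank line
[ -n "$block" ] && blocks=$((blocks + 1))

# the first block holds general variables, each further block one notification
echo "$(date) received $blocks blocks, i.e. $((blocks - 1)) notifications" \
    >> "$OMD_ROOT/tmp/mybulkcount.log"
exit 0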

8.7. Supplied notification scripts

As delivered, Checkmk already provides a whole range of scripts for connecting to popular and widely used instant messaging services, incident management platforms and ticket systems. You can find out how to use these scripts in the articles on the respective integrations.

9. Files and directories

9.1. Paths of Checkmk

File path Function

~/var/log/cmc.log

The CMC log file. If notification debugging is activated, here you will find precise information as to why notifications were, or were not generated.

~/var/log/notify.log

The notification module’s log file.

~/var/log/mknotifyd.log

The notification spooler’s log file.

~/var/log/mknotifyd.state

The current state of the notification spooler. This is primarily relevant for notifications in distributed environments.

~/var/nagios/debug.log

The Nagios debug log file. Switch on the debug messages in ~/etc/nagios/nagios.d/logging.cfg using the debug_level variable.

~/var/check_mk/notify/spool/

Storage location for the spool files to be processed by the notification spooler.

~/var/check_mk/notify/deferred/

With temporary errors the notification spooler moves the files to here and retries after a couple of minutes.

~/var/check_mk/notify/corrupted/

Defective spool files will be moved to here.

~/share/check_mk/notifications

Notification scripts supplied as standard with Checkmk. Make no changes here.

~/local/share/check_mk/notifications

Storage location for your custom notification scripts. If you wish to customize a standard script, copy it from ~/share/check_mk/notifications to here, and retain the original file name.

~/share/doc/check_mk/treasures/notifications

Other notification scripts which you can slightly customize and use.

9.2. SMTP server log files

The SMTP server’s log files are system files outside the site directory, so their absolute paths are listed below. Precisely which log file is used depends on your Linux distribution.

Path                  Function
/var/log/mail.log     SMTP server’s log file under Debian and Ubuntu
/var/log/mail         SMTP server’s log file under SUSE Linux Enterprise Server (SLES)
/var/log/maillog      SMTP server’s log file under Red Hat Enterprise Linux (RHEL)
