1. Introduction

Checkmk provides comprehensive options for monitoring Oracle databases. With the plug-in you can retrieve not only a database's tablespaces and active sessions, but also many other types of performance data. A complete list of the monitoring options can be found in our Catalogue of Check Plug-ins. We extend this plug-in regularly, so it is always worth looking in the catalogue. Among others, Checkmk can monitor the following values (some of the plug-ins only from version 1.6.0):

To monitor the databases, only the plug-in is required in addition to the agent on the database server. The Linux, AIX, Solaris, HP-UX and Windows operating systems are currently supported. No further software is required for monitoring, either on the Checkmk instance or on the database server.

Many of the steps to set up monitoring are the same for both Linux and Windows. For this reason, we will first describe the general steps, and then the steps specific to Windows and Linux. Please note that a setup using the Agent Bakery is currently only available for Linux, Solaris and AIX.

2. General steps for an initial setup

2.1. Create a user

In principle, the first setup is quick and requires only three steps. The first step, of course, is to have a user who is allowed to query the databases. Provided that you are not using RAC, create a user for each database to be monitored. How you access an instance differs depending on the installed operating system. Under Linux, for example, you first set the relevant instance as an environment variable before creating a user for it. Normally you first switch to the oracle user, but this can differ depending on the setup:

root@linux# su - oracle
oracle@myhost:~$ export ORACLE_SID=MYINST1

Then log on to the instance and create a user for the monitoring. To get all of the data, the following permissions are required. In the following example the newly-created user is named check_mk. You can also specify any other desired name:

sqlplus> create user check_mk identified by myPassword;
sqlplus> grant select_catalog_role to check_mk;
sqlplus> grant create session to check_mk;
sqlplus> connect check_mk/myPassword

You can find out exactly how the login to a specific instance functions in the documentation from Oracle.

Multitenant databases

You can also configure the login for multitenant databases. This is usually done in the configuration using a special common user whose name carries the C## prefix.
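For example, creating such a common user in a container database might look like this (a sketch following Oracle's common-user convention; the grants and the container clause mirror the example above and are not prescribed by the plug-in):

```
sqlplus> create user c##check_mk identified by myPassword container=all;
sqlplus> grant create session to c##check_mk container=all;
sqlplus> grant select_catalog_role to c##check_mk container=all;
```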

2.2. Create the configuration file and user

After you have created a user, the next step is to enable the agent plug-in to retrieve this information later. The simplest way is to define the same login data for all instances and set a default in the configuration. Now create a new configuration file on the Oracle host for Linux, AIX, Solaris or for Windows. The following example shows the file for unix-like systems:

# Syntax:
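Assuming the plug-in's colon-separated USER:PASSWORD format, the default user from above might be entered like this (a sketch, not the complete file):

```shell
# hypothetical default credentials for all instances
DBUSER='check_mk:myPassword'
```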

For Windows this procedure looks very similar. There you set the variable in a PowerShell script:

# Syntax:
$DBUSER = @("checkmk","myPassword")

The standard user is all that the plug-in really requires. All other options that you can set in unix-like systems or under Windows are optional.

2.3. Getting the script and configuration on to the host

Now that you have created a user and have saved it in the configuration file, you must also place the plug-in in the correct agent directory. The correct file path for agent plug-ins is described in the articles on Linux and Windows. Solaris and AIX usually use the same path as that under Linux. Under Linux, and also under AIX or Solaris, be sure to make the plug-in executable, and correct it if necessary:

root@linux# cd /usr/lib/check_mk_agent/plugins/
root@linux# ls -lA
-rw-r--r-- 1 root root 120808 Jan 25 11:29 mk_oracle
root@linux# chmod +x mk_oracle
root@linux# ls -lA
-rwxr-xr-x 1 root root 120808 Jan 25 11:29 mk_oracle

2.4. Using the Oracle-Wallet

Alternatively, you can use the Oracle Wallet instead of defining the user and password directly in a configuration file. This has the advantage that you no longer need to store the access data in unencrypted form on the Oracle host: even if you have set the configuration file's permissions appropriately, plain-text credentials are removed from the file entirely.

The Oracle Wallet stores the access data in encrypted form on the host to be monitored, so that it can be used without the login data having to be made known explicitly. Checkmk can thus use this wallet, while in principle the access data needs to be known only to the database administrator (DBA). Create a wallet on the appropriate server, or have the DBA create one:

root@linux# mkstore -wrl /etc/check_mk/oracle_wallet -create

The plug-in will access this file later whenever a connection to an instance is to be established. So that the necessary user data can also be found, you must enter it once in the wallet. In the following example we add the user check_mk for the instance MYINST1:

root@linux# mkstore -wrl /etc/check_mk/oracle_wallet -createCredential MYINST1 check_mk myPassword
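If you want to verify the stored entries, mkstore can also list the credentials held in the wallet (it will prompt for the wallet password):

```
root@linux# mkstore -wrl /etc/check_mk/oracle_wallet -listCredential
```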

For the plug-in to know where to look for the wallet, it must find two files. The first file is sqlnet.ora, in which the info on the wallet's location can be found. The second file, tnsnames.ora, defines the address of the instance, so that it can be contacted via its alias. So that the agent plug-in can access these files, under Linux, Solaris and AIX you can either specify their path with the extended variable TNS_ADMIN (especially useful if the files already exist), or create them explicitly. Windows even requires that you specify them manually.

First create the sqlnet.ora file. The plug-in alternatively searches in this file for the login data, so the correct file path to the wallet file you just created must be entered here. Make sure that you set the SQLNET.WALLET_OVERRIDE parameter to TRUE:

LOG_DIRECTORY_CLIENT = /var/log/check_mk/oracle_client

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY=/etc/check_mk/oracle_wallet))
   )

SQLNET.WALLET_OVERRIDE = TRUE

Now the plug-in knows which credentials should be used. So that it also contacts the correct address, create tnsnames.ora as the second file. You can find examples of such a configuration on the Checkmk server, and also on the Oracle host. The exact syntax can be found in the Oracle documentation, but as an example the file could look like this:

MYINST1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = MYINST1)
    )
  )

With this last step you have created the prerequisites for keeping the access data out of the agent plug-in's configuration file. Instead of user and password, you simply enter a / (slash):
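Assuming the colon-separated DBUSER format, the entry in the configuration file then reduces to this sketch:

```shell
# user and password now come from the wallet, not from this file
DBUSER='/:'
```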


You can of course add further access data to the wallet at a later date. The tnsnames.ora file must then simply be amended as necessary.

3. Linux, AIX, Solaris

3.1. Plug-in and configuration file paths

Under unix-type systems Checkmk uses a single, uniform plug-in. On the one hand this reduces maintenance effort, since SQL statements are not duplicated, and on the other hand you only need to pay attention to a single plug-in. On all supported systems the file paths for the agents are the same or very similar. Specifically, you need the following directories, if you have not changed them:

OS                      Plug-in file path                   Configuration file path

Linux, Solaris, HP-UX   /usr/lib/check_mk_agent/plugins/    /etc/check_mk/

3.2. Further options

In the general setup you have already learned about the first variables for getting monitoring data from your Oracle instances. However, depending on the application scenario, you will quickly need further possibilities for better, individual control of the monitoring per instance.

Extended user configuration

With the standard login you can cover most, or maybe even all, instances of a database. However, there are special cases in which you require individual access data for specific instances. In the configuration file you therefore have the following three options for specifying users:



DBUSER: The default if no individual access data has been defined for the database instance.


DBUSER_MYINST1: Access data for a specific database instance, in this case for the instance MYINST1. The login data is only used for this SID.


ASMUSER: Special access data for Automatic Storage Management (ASM).

Important: For an ASM only one login at a time can be specified.

Beyond user name and password, these variables allow even more options. You can also determine whether the user is a SYSDBA or SYSASM, on which combination of address and port the instance listens, and even which TNSALIAS should be used. These additional specifications are, unlike user and password, always optional. In addition to the above example, a configuration can look like this:

# Syntax
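A sketch of such a configuration, assuming the colon-separated field order USER:PASSWORD:ROLE:HOST:PORT (all values illustrative):

```shell
# default user, an instance-specific user without a special role,
# and an ASM user with the SYSASM role but without host and port
DBUSER='check_mk:myPassword'
DBUSER_MYINST2='check_mk:myPassword::localhost:1521'
ASMUSER='asm_check:myPassword:SYSASM'
```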



A few explanations for the above example:

  • You can define any number of individual access entries. These always take precedence over the default.

  • Each option is separated from the others by a : (colon).

  • If an optional field is omitted in the middle of the string, its colon must still be written. See DBUSER_MYINST2, where no special role is specified.

  • If after a certain point, no more optional fields are needed, you can omit the colons. See ASMUSER, where neither the host nor the port is specified.

Including or excluding instances

In some cases you may not want to include particular instances in the monitoring. This may be because it is only a playground for developers, or for other reasons. To make the configuration as simple as possible in individual situations, you have various options to entirely or partially exclude one or more instances:



ONLY_SIDS: Here you can determine which instances are to be monitored; all other instances will then be ignored. Of course the instances will still only be monitored if they are found by the plug-in. This option is very useful if the number of instances to be monitored is smaller than the number of instances to be ignored.


SKIP_SIDS: All instances listed here, separated by spaces, are completely ignored and not monitored. This option is very suitable if the number of instances to be ignored is smaller than the number to be monitored.


EXCLUDE_<SID>: With this option you can partially exclude instances. Again, you can use the value ALL to exclude all sections, and thus do the same as if you were adding them to SKIP_SIDS. However, you can also disable only certain sections of certain instances, and thus monitor only a subset of what is possible. Important: An ASM (+ASM) cannot be completely disabled with this option.

You will already have guessed: the order in which these options are processed determines the result. The entries are in fact processed per instance exactly in the sequence specified. If the variable ONLY_SIDS is set, SKIP_SIDS is no longer evaluated, nor is it checked whether an EXCLUDE_<SID> variable has been set to ALL for an instance. If ONLY_SIDS is not set, the system proceeds according to the sequence. In case of doubt, i.e. as the default behavior, the instance will be monitored.

Below is an example in which all of these variables are set, together with the resulting behavior:

ONLY_SIDS='INST1 INST2'
SKIP_SIDS='INST2 INST3'
EXCLUDE_INST1='ALL'
EXCLUDE_INST2='tablespaces rman'

Since the positive list in the first line always has priority, the second and third lines are accordingly no longer evaluated. Only the fourth (=last) line will be considered later, since that instance is only to be partially evaluated.

In practice, it therefore only makes sense to use one of these variables to determine the set of instances to be monitored.

Determining the data to be fetched

As you learned in the previous section, it is not only possible to disable instances completely, but also to monitor them only partially. The possible uses are diverse; it is especially practical when certain long-running sections should not be included everywhere, or when only basic information from test instances is required. To restrict sections globally, set the corresponding variables directly; to restrict only certain instances, use the EXCLUDE_<SID> variable which you already know from the previous section. The global variables are:



SYNC_SECTIONS: Sections that are to be queried synchronously, i.e. every time the agent runs. Since the query interval is 60 seconds by default, the SQL statements used must run through correspondingly fast. If the variable is not specified, all sections are queried.


ASYNC_SECTIONS: Sections that are to be queried asynchronously, i.e. only every x minutes. How long the data remains valid is determined by the CACHE_MAXAGE variable, below in this table.


SYNC_ASM_SECTIONS: Here the same mechanism applies for ASM sections as that used by the SYNC_SECTIONS variable.


ASYNC_ASM_SECTIONS: Here the same mechanism applies for ASM sections as that used by the ASYNC_SECTIONS variable.


CACHE_MAXAGE: This variable determines how long asynchronously retrieved data remains valid. If it is not specified, a default of 600 seconds (=10 minutes) is used.

The mechanism therefore allows a value specified in the configuration file to predefine a default and, if necessary, to adjust this value again for individual instances as required. A configuration could then look like this, for example:

# SYNC_SECTIONS="instance sessions logswitches undostat recovery_area processes recovery_status longactivesessions dataguard_stats performance locks"
# ASYNC_SECTIONS="tablespaces rman jobs ts_quotas resumable"
# SYNC_ASM_SECTIONS="instance processes"
# ASYNC_ASM_SECTIONS="asm_diskgroup"

ASYNC_SECTIONS='tablespaces jobs rman resumable'
SYNC_ASM_SECTIONS='instance'
CACHE_MAXAGE=300

EXCLUDE_INST1='undostat locks'
EXCLUDE_INST2='jobs'

As can be seen in the example, the ASM instances will only query the instance section, and additionally, on all regular instances a minimal set of default asynchronous sections has been specified. Additionally, on the INST1 instance the synchronous sections undostat and locks will be omitted. Since the synchronous sections are not explicitly specified, all synchronous sections are retrieved from all other instances. INST2 in turn queries only three of the four asynchronous sections, since jobs was additionally excluded. And finally, the cache of 10 minutes is reduced to 5 minutes (=300 seconds), as this is sufficient time to get all data.

Important: If you define in the configuration file which sections you would like to retrieve, and by which method (you can also turn a normally asynchronous section into a synchronous one), you must specify all sections which should be executed in the respective area. For example, if you want to omit only locks from the synchronous query, specify the entire synchronous list and simply leave out locks:

# Just exclude 'locks' from sync sections:
SYNC_SECTIONS='instance sessions logswitches undostat recovery_area processes recovery_status longactivesessions dataguard_stats performance'

The same applies for the other three variables in which the sections can be determined.


Coming soon. This can be used to add a prefix or postfix to an instance's TNSALIAS.

You can additionally set TNS_ADMIN to specify the path to existing sqlnet.ora and tnsnames.ora files.

3.3. Monitoring remote databases

Under unix-type systems, not only can you retrieve local instances, but also connect to and retrieve from remote instances. This, for example, is advantageous if you do not have access to the underlying system, but still want to monitor the database. To monitor remote databases, there are, however, the following requirements:

  • The Linux AIO access library is installed. Under RHEL/CentOS the package is called libaio.

  • The Instant Client for Oracle Database is installed.

  • SQLPlus already exists in the installation, or must be installed as an extension package to the client.

As a rule, the conditions are already fulfilled when an Oracle installation is present on the host on which the plug-in is to be executed. Otherwise, use the appropriate packages to do so.

In order for the plug-in to connect to the remote database, first store the access data in the configuration file. This procedure is similar to the one you already know from DBUSER. However, there are a number of additional, mandatory, specifications:

# Syntax:
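A sketch of such a configuration, assuming the field order USER:PASSWORD:ROLE:HOST:PORT:PIGGYBACKHOST:SID:VERSION:TNSALIAS (IDs, host names and paths are illustrative):

```shell
# hypothetical example with two remote instances
REMOTE_INSTANCE_1='check_mk:myPassword::myRemoteHost1:1521:myOracleHost:MYINST3:11.2'
REMOTE_INSTANCE_2='check_mk:myPassword::myRemoteHost2:1521::MYINST1:11.2'
# where the Oracle client (and sqlplus) is installed on the querying host
REMOTE_ORACLE_HOME='/usr/lib/oracle/11.2/client64'
```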



In the example, two remote instances are configured. So that each line is unique, each variable is given an ID. These IDs can be freely chosen; they just have to be unique within the configuration file. As you have probably noticed, after the port specification there are further values. These are partly optional, and partly necessary to establish a connection.

The first additional value (=PIGGYBACKHOST) used for the MYINST3 instance is myOracleHost. This specification assigns the results of the query to the specified host. If this host exists in Checkmk, the new services will be created there instead of on the host where the plug-in is running. This specification is missing on the second instance MYINST1; the assignment to another host is optional.

The second new value (=SID) is the instance name. Since the plug-in cannot detect which instances are running on the remote host, a configuration line must be specified for each remote instance; this value is therefore required.

The third value (=VERSION) is also required, because much meta-information is only available directly on the host. The version specification therefore cannot be omitted, and it determines which SQL statements may be passed to the instance. In the example, both instances use version 11.2.

The fourth and last value (=TNSALIAS) is again optional and can be used if you access the remote instance via the wallet or the LDAP/Active Directory. In the event that the instance responds only to a specific TNS alias, you can specify this alias here.

So that the sqlplus program can also be found, use the REMOTE_ORACLE_HOME variable to specify where the Oracle client is located on the host that runs the plug-in. Only then are all resources available that are required for the queries.

Important: When querying remote instances, there are some restrictions and special features:

  • You cannot — understandably — exclude remote instances with SKIP_SIDS, and in return you do not need to include them in ONLY_SIDS.

  • Instances with the same name must not be assigned to the same host. This is especially relevant if you are fetching instances from multiple remote and/or local hosts where identical names are used.

3.4. Setting up via the Agent Bakery

CEE The setup can be greatly simplified under Linux, AIX, Solaris and HP-UX with the Agent Bakery, because syntax errors in the configuration files are avoided, and adaptations to changing environments can be implemented more easily. The main difference compared to a manual configuration is that you only need to work on the Oracle host at the command line if you want to make special Oracle-specific configurations.

Nevertheless, you cannot configure all functions of the plug-in via the Agent Bakery. On the one hand this concerns obsolete configuration options that are only available for compatibility reasons; these can be replaced with the current options accordingly. On the other hand there are functions that require major intervention and clear expertise. Accordingly, the Custom SQLs cannot be configured directly in the Agent Bakery.

With WATO > Monitoring Agents > Rules > ORACLE Databases (Linux, Solaris, AIX) you can create a rule. Here you will find all options available to you for configuring the plug-in:

monitoring oracle bakery ruleset

Configuring users

In the Agent Bakery you also have the option of creating a standard user and creating exceptions for specific instances. The options that are separated by a colon in the configuration file appear here as individual fields which you can fill out as needed. In the simplest configuration it will look something like this:

monitoring oracle bakery login1

Of course you can also use the wallet here by selecting Authentication method in the dropdown menu, and simply changing to Use manually created ORACLE password wallet.

The configuration for an ASM is done analogously via the Login for ASM option, and the exceptions for specific instances can be found at Login for selected databases.

Advanced options

The following table lists the remaining options in the ORACLE Databases (Linux, Solaris, AIX) rule set, along with a reference to where to find a description of the option:


Instances to monitor

This option combines several options of the configuration file, allowing you to include or exclude instances.

Add pre or postfix to TNSALIASes

This option allows you to extend the TNSALIAS either globally or for a specific instance.

Sections - data to collect

All available sections are listed under this option and can be configured individually at the global level. The option therefore corresponds to the variables SYNC_SECTIONS and ASYNC_SECTIONS, as well as their ASM counterparts, from the section on determining the data to be fetched.

Exclude some sections on certain instances

If you do not want to use EXCLUDE_<SID> to exclude the entire instance, but only to exclude a few sections, you can do so using this option. A more detailed description can be found in the options for the instances and sections.

Cache age for background checks

Specify here how long asynchronous sections should remain valid. The default value is 600 seconds (=10 minutes).

Sqlnet Send timeout

This option is added to sqlnet.ora and sets a timeout for the instance itself.

Remote instances

Configure remote instances with this option. It contains all the elements of the configuration that you already know. With Unique ID you specify the variable's ID; each ID applies only to the specified instance and must therefore be unique.

ORACLE_HOME to use for remote access

Here you can determine where the agent plug-in can find the sqlplus program. You must specify this path if you want to monitor a remote instance but sqlplus cannot be found via the environment variables.

TNS_ADMIN to use for sqlnet.ora and tnsnames.ora

If you have already created the corresponding files in another location, you can use this option, as described above, to provide the file path to the files. Otherwise, they are searched for under /etc/check_mk/.

4. Windows

4.1. Plug-in and configuration paths

On Windows, Checkmk uses PowerShell as the scripting language. The functionality is similar to the plug-in under unix-type systems, but covers only a part of it. To use the plug-in under Windows you need the following paths, if you have not changed them.

OS                       Plug-in path                                Configuration path

Windows                  C:\ProgramData\checkmk\agent\plugins\       C:\ProgramData\checkmk\agent\config\

Windows (Legacy Agent)   C:\Program Files (x86)\check_mk\plugins\    C:\Program Files (x86)\check_mk\config\

Note: Since the Legacy Agent is only needed in very rare, exceptional cases, it is not considered in the following description. We therefore always refer to the current agent architecture here.

To activate the mk_oracle.ps1 plug-in, either copy the plug-in from C:\Program Files (x86)\checkmk\service\plugins\ into the plug-in directory described above, or specify the file's installation path in the agent configuration file.
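In PowerShell, the copy step might look like this (a sketch; paths as described above):

```powershell
Copy-Item -Path "C:\Program Files (x86)\checkmk\service\plugins\mk_oracle.ps1" `
          -Destination "C:\ProgramData\checkmk\agent\plugins\"
```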

4.2. Special features under Windows and Powershell

Windows normally prevents the execution of scripts if they have not been signed. You can work around this problem by modifying the policies for running PowerShell scripts for the user who runs the Checkmk agent:

PS C:\ProgramData\checkmk\agent> Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine
PS C:\ProgramData\checkmk\agent> Get-ExecutionPolicy -Scope LocalMachine

This option is useful if you want to briefly test a script or the general functionality of the Checkmk agent. To avoid compromising the security of your system, revert this setting on production servers after testing has been completed:

PS C:\ProgramData\checkmk\agent> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
PS C:\ProgramData\checkmk\agent> Get-ExecutionPolicy -Scope LocalMachine

Understandably, you probably don't want to change the policies every 60 seconds. You can therefore set a permanent exception only for the relevant scripts. Note that the configuration script is also added to the exceptions here. For easier readability, the output has been completely omitted in this example:

PS C:\ProgramData\checkmk\agent> Unblock-File -Path .\plugins\mk_oracle.ps1
PS C:\ProgramData\checkmk\agent> Unblock-File -Path .\config\mk_oracle_cfg.ps1

4.3. Advanced options

In the general setup you have already learned about the first variables to get monitoring data from your Oracle instances. Further options that are also available to you under Windows are described in the following sections.

Advanced user configuration

Enabling and disabling instances

Determining data to be fetched

4.4. Monitoring remote databases

Monitoring remote databases is currently not possible using the Windows plug-in. Therefore if you want to monitor remote databases, you need a host with a compatible unix-type operating system.

4.5. Setup via the Agent Bakery

CEE A setup via the Agent Bakery is currently only available for unix-type operating systems.

5. Clustered instances

5.1. Standby databases

Oracle supports so-called standby databases which can perform specified tasks, and which are usually simply copies of production or primary databases. These database concepts also require special monitoring mechanisms. You can find out more about these mechanisms in the following sections.

With Active Data Guard (ADG)

Once you have licensed and activated ADG, you do not need to make any changes to the configuration of the agent plug-in, since you can read from a standby instance at any time without having to interrupt the synchronization with the primary instance.

Without Active Data Guard (DG)

If the instances do not have ADG, the user with which the monitoring data from the standby instances is to be fetched needs the sysdba role. This authorization enables the user to fetch at least part of the data, even if the primary instance fails and the instance on the standby server has not yet been changed from MOUNTED to OPEN.

Assign the authorization to the user who is to retrieve the data from the instance(s). Important: your setup may differ from this example. Here the role is assigned to the user created in the example above:

sqlplus> grant sysdba to check_mk;

To enable the data to be queried by the agent plug-in on the standby server in case of an error, create the user on the primary instance, and then copy the password file to the standby server. In the plug-in's configuration file, set the role of the user to sysdba:

# Syntax:
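Assuming the colon-separated DBUSER format with the role as the third field, the entry might look like this (a sketch; host, port and TNS alias are omitted here):

```shell
# hypothetical entry: user, password and the sysdba role
DBUSER='check_mk:myPassword:SYSDBA'
```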

Note that specifying a host, port, or TNS alias is optional, and can be omitted as always. In addition, the agent plug-in must of course be installed on both the primary Oracle host and the standby hosts.

Setting up clustered services

On the monitored host you need to perform certain actions, and on the Checkmk side it is also necessary to customize the services and assign them to a cluster host, regardless of whether you use ADG or DG. The corresponding check plug-ins have already been prepared to the extent that they can be configured as Clustered Services. Create a cluster host in Checkmk, and assign to it the individual Oracle hosts on which the primary and the standby instances run. Then assign the following services to this cluster host:

  • ORA .* RMAN Backup

  • ORA .* Job

  • ORA .* Tablespaces

After this you will no longer need to worry about which instance the data comes from, and will have ensured seamless monitoring of the above-mentioned services — even in the event of a switchover of the primary instance.

5.2. Real Application Cluster (RAC)

Since a RAC uses central storage for its data, it is sufficient to create the user for the agent plug-in only once. However, the agent plug-in must be installed and configured on each node of the Oracle cluster.

Important: Always address the cluster's individual nodes directly, and do not query the SCAN listener. This is the only way to ensure that access via the plug-in works correctly.

Setting up clustered services

Even with an RAC, it makes sense to combine specified services in a cluster host. In addition to the services that you assign to the cluster host under an (Active) Data Guard, you also assign the following services to the cluster host to ensure seamless monitoring in the event of switchovers:

  • ASM .* Diskgroup

  • ORA .* Recovery Area

6. Cluster Ready Services (CRS)

7. Monitoring own SQLs (Custom SQLs)

7.1. Why specify own SQLs?

With its official plug-in Checkmk already provides a large number of SQLs with which you can monitor your database instances. To make these suitable for the widest possible range of technical and content requirements, they are of course kept in a very generalised form.

In order to meet the individual requirements of each company for the monitoring of a specific database, from version 1.6.0 onwards Checkmk provides the possibility of defining your own SQL statements and having them queried via the mk_oracle plug-in. These SQLs are then automatically recognized and monitored as separate services in the web interface.

Important: It is only possible to query own SQLs under Linux, AIX and Solaris. Under Windows this option is not available.

7.2. Simple integration of own SQLs

Writing SQLs to be queried

The easiest way to connect such SQLs is to use the Custom SQLs for Oracle DBs check plug-in. To do this, first create the file mySQL.sql in the agent configuration directory of the host where the SQL is to be executed. The following is a dummy that illustrates the syntax:

/*Syntax help in comments. The first word is always the key word and ends with a ":"*/

/*details:Text to display in the service detail output*/
prompt details: Some details for the service output;

prompt perfdata:MyMetricName1=10;15;20;30 MyMetricName2=16;15;20;30;
prompt perfdata:MyMetricName3=21;15;20;30 MyMetricName4=15;15;20;30;

/*long:Text to display in the long output of the service*/
prompt long: Here comes some long output for the service;
prompt long: Here comes some more long output for the service;

/*exit:Status of the service as a number*/
prompt exit:2;

The example shows, on the one hand, that you can define any number of statements in such a file. On the other hand, the syntax is very similar to that of a local check, especially with regard to performance values. Here, however, the syntax is much more powerful, because you can generate multiline output and then process it on the Checkmk server as a service. In principle, all lines are optional and do not need to be filled. In detail, the possible keywords are:

  • details: Here you can determine what should be output in the generated service's Status Detail. This line is introduced with the keyword and a colon. The rest of the line is the output.

  • perfdata: Performance data is passed with this keyword. Within a line, you can create any number of metrics — each separated by a space. You can also distribute the output of the metrics over several lines. Just always start with the keyword perfdata:.

  • long: If the service should have a Long Output, you can specify it here. You can also use this keyword multiple times to create multiple lines in the Long Output.

  • exit: If the output should have a certain status, you can specify this here. The known assignments 0,1,2,3 are available for the statuses OK, WARN, CRIT, UNKNOWN.

Note: You do not have to define the keyword elapsed manually. It is generated automatically at runtime to check how long the commands you defined took to process.

Configuring the mk_oracle plug-in

Now that you have defined your first, very simple SQL, make it known to the plug-in. This is done via the familiar configuration file, which you can modify accordingly:


SQLS_SECTIONS="mycustomsection1"

mycustomsection1 () {
    SQLS_SIDS="MYINST1"
    SQLS_DIR=/etc/check_mk
    SQLS_SQL=mySQL.sql
}

With the first option — SQLS_SECTIONS — you determine which individual sections you want to have executed. In the example we have specified only one, and then described it in more detail directly afterwards. Each section is actually a small function called by the mk_oracle plug-in.

In this function you can then determine further details and specify for which instances (SQLS_SIDS) this section applies. In addition, you also define where the file with the SQL statements is located (SQLS_DIR), and the name of this file (SQLS_SQL). This simple configuration is sufficient to be able to see the result in Checkmk. To do this, perform a service discovery and activate the new service. Afterwards you will see this new service with the other services in the host overview:
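Several sections follow the same pattern: each section is a shell function, and the global SQLS_SECTIONS variable lists which functions the plug-in should execute. The following sketch is only an illustration; the section names, the second instance MYINST2 and the file myOtherSQL.sql are hypothetical placeholders to be adapted to your setup:

```shell
# Hypothetical example of a configuration with two custom sections.
# Section names, instance names and file names are placeholders.
SQLS_SECTIONS="mycustomsection1 mycustomsection2"

mycustomsection1 () {
    SQLS_SIDS="MYINST1"
    SQLS_DIR=/etc/check_mk
    SQLS_SQL=mySQL.sql
}

mycustomsection2 () {
    # A section may also serve several instances at once
    SQLS_SIDS="MYINST1 MYINST2"
    SQLS_DIR=/etc/check_mk
    SQLS_SQL=myOtherSQL.sql
}
```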

[Image: the new custom SQL service in the host's service list]

7.3. Advanced options

The possibilities for monitoring your own SQLs of course go beyond the simple case shown above. In the following you will find a small overview of the available variables. For a detailed description you can also call the plug-in with the --help option. Important: Variables that can only be set outside or only inside a section function are marked accordingly. All others can be defined in both sections. If they are set outside of a section, they will apply globally to all sections.

Variable            Short description

SQLS_SECTIONS       The self-defined section functions to be executed by the
                    plug-in. Important: This variable can only be set outside
                    a section function. (=global)

SQLS_SIDS           The instances that are to execute the section(s). Can be
                    set globally or per section.

SQLS_DIR            The file path under which your own SQLs were stored. Can
                    be global or per section.

SQLS_SQL            The file that contains the instructions for a section.

                    In development

SQLS_SECTION_NAME   The section name when you have written your own check
                    plug-in for the individual SQLs.

                    The separator of the individual elements in a line as
                    ASCII ID. This variable can only be used in conjunction
                    with the SQLS_SECTION_NAME variable.

                    Determines an individual identifier for the service name.
                    Normally the SID and the file name of the SQLs are taken
                    over. Important: This variable cannot be used together
                    with the SQLS_SECTION_NAME variable, and can also only be
                    set within a section function.

                    In development

                    Performs the same task as CACHE_MAXAGE.

                    Defines an individual user for the sections.

                    Defines an individual password for the sections.

                    Extends the connection via a SYS role for the sections.

                    Sets an individual TNSALIAS for the sections.


7.4. Using your own check plug-ins

If the possibilities of the syntax described above are not sufficient, you can also use the SQLS_SECTION_NAME variable to output your own section name for one or more SQLs. However, this requires that you have also written an appropriate check plug-in and included it in your Checkmk instance.

If you have written such a check plug-in, you are completely free in the syntax of the plug-in’s output and can go your own way. Since this method is the most comprehensive, and also the most difficult, it is mentioned here only for completeness. It assumes that you know how to program a Checkmk check and integrate it into the monitoring instance. After that, you simply assign the individual SQLs to this check plug-in with the variables described above.

8. Diagnostic options

8.1. Testing connections

Linux, AIX, Solaris

If you have problems connecting to one or more instances on an Oracle server, the first thing you can do is to check basic parameters. With the -t option you can display the details of a connection. In the output the dummy sections have been omitted for readability. Also note that the plug-in must be provided with the file path to its configuration file beforehand, as it relies on this. The following is an example on a Linux server:

root@linux# export MK_CONFDIR="/etc/check_mk/"
root@linux# /usr/lib/check_mk_agent/plugins/mk_oracle -t
    Operating System:       Linux
    ORACLE_HOME (oratab):   /u01/app/oracle/product/11.2.0/xe
    Logincheck to Instance: XE
    Version:                11.2
    Login ok User:          checkmk on ORA-SRV01 Instance XE
    SYNC_SECTIONS:          instance dataguard_stats processes longactivesessions sessions recovery_status undostat logswitches recovery_area performance
    ASYNC_SECTIONS:         tablespaces rman jobs ts_quotas resumable

Since this call is most likely to be made in the event of an error, the output will then also include the connection string used for the connection, as well as the first 100 characters of the error message returned by the instance. With this information you can quickly identify and correct simple configuration problems.


Windows

In development

8.2. Getting a log generated

Linux, AIX, Solaris

If the error cannot be found by checking a simple connection, the next step is to create a complete log, which logs all of the plug-in’s steps. Again, you will need the file path to the configuration file, and also the path to the plug-in’s cached data. Again, the output of the sections has been omitted here to improve readability:

root@linux# export MK_CONFDIR="/etc/check_mk/"
root@linux# export MK_VARDIR="/var/lib/check_mk_agent/"
root@linux# /usr/lib/check_mk_agent/plugins/mk_oracle -l
Start logging to file: /var/lib/check_mk_agent/log/mk_oracle.log

When in doubt, you can use this log to identify precisely the line of the script at which the problem occurred.


Windows

In development

8.3. Debugging the plug-in

Linux, AIX, Solaris

If you cannot track down the problem even with the help of the log, as a last resort the plug-in provides the complete output of all of its steps for error analysis. This output is the most comprehensive, and certainly the most difficult to read, method of getting to the cause of a problem, and it should therefore only be used when all else fails. You can call the debugging with the -d option. Do not forget the necessary environment variable:

root@linux# export MK_CONFDIR="/etc/check_mk/"
root@linux# /usr/lib/check_mk_agent/plugins/mk_oracle -d

Important: In this output, sensitive data such as passwords is not masked, so everything, including every variable, is readable in plain text.


Windows

In development

9. Files and directories

9.1. On an Oracle host under Linux, AIX and Solaris

File path                                    Description

/usr/bin/check_mk_agent                      The agent that collects all data about the host.

/usr/lib/check_mk_agent/plugins/mk_oracle    The usual directory where the plug-ins are located.
                                             Note that the path under AIX is slightly different:
                                             /usr/check_mk/lib/plugins/mk_oracle

/etc/check_mk/mk_oracle.cfg                  The configuration file for the plug-in. Again, the
                                             path differs under AIX:
                                             /usr/check_mk/conf/mk_oracle.cfg

/etc/check_mk/sqlnet.ora                     The configuration file required for the Oracle
                                             Wallet.

/etc/check_mk/tnsnames.ora                   The configuration file which, if you specify it
                                             manually, determines an alias for a schema. Sample
                                             files are also located in the Oracle installation,
                                             but since the path differs from installation to
                                             installation, it cannot be specified in a
                                             standardised way.

9.2. On an Oracle host under Windows

File path                                                   Description

C:\Program Files (x86)\checkmk\service\check_mk_agent.exe   The agent that collects all data
                                                            about the host.

C:\ProgramData\checkmk\agent\plugins\                       The usual directory where the
                                                            plug-ins are located.

C:\ProgramData\checkmk\agent\config\mk_oracle.cfg           The configuration file for the
                                                            plug-in.

C:\ProgramData\checkmk\agent\config\sqlnet.ora              The configuration file required for
                                                            the Oracle Wallet.

C:\ProgramData\checkmk\agent\config\tnsnames.ora            The configuration file which, if you
                                                            specify it manually, determines an
                                                            alias for a schema. Sample files are
                                                            also located in the Oracle
                                                            installation, but since the path
                                                            differs from installation to
                                                            installation, it cannot be specified
                                                            in a standardised way.

9.3. On the Checkmk server

File path                                        Description

                                                 Here are examples of the configuration under
                                                 Linux, AIX or Solaris. The plug-in needs such a
                                                 configuration to be able to fetch data, since
                                                 access data can be defined there.

                                                 Examples for the configuration under Windows can
                                                 be found here.

~/share/check_mk/agents/plugins/mk_oracle        The plug-in that fetches the data on the Oracle
                                                 host.

~/share/check_mk/agents/plugins/mk_oracle_crs    This plug-in provides data about an Oracle
                                                 Cluster Manager.
