Extending #MSOMS Log Analytics through Linux – part 1

I don’t know how many articles this series will end up with; it’s basically another journey I started trying to solve real world issues.
This time I needed to monitor Kemp LoadMaster devices from OMS, specifically using Log Analytics (LA).
As you may know, I published a SCOM Management Pack for Kemp long ago. It is still doing a decent job, but what if a customer wants to be completely cloud borne?
So I explored the options to extend ingested data into OMS:

  1. you can write a custom management pack that uses write actions to send all sorts of native data to LA, but this requires a SCOM infrastructure
  2. you can use the ingestion API; if you don’t know what it is, take a look at the meetup from my good friend Tao
  3. you can use custom log file ingestion

Now, to be completely cloud borne we must rule out option 1, and custom log ingestion is not an option since it lacks flexibility, so only the ingestion API remains.
This API is great indeed, but it has a drawback: it just creates custom data types in LA, and it is not possible to mimic a performance data point (Type:Perf in LA terms) or any other “native” type.
The perf data points are especially important because in LA they’re treated with custom views called “Metrics”.
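As an aside, calling the ingestion API (the HTTP Data Collector API) is simple; the only tricky part is building the SharedKey signature. Here is a minimal Ruby sketch; the workspace ID, key, record fields and Log-Type are placeholders of mine, and the actual send is commented out:

```ruby
require 'base64'
require 'json'
require 'net/http'
require 'openssl'
require 'time'

# Placeholder credentials: use your own workspace ID and primary key.
WORKSPACE_ID = '00000000-0000-0000-0000-000000000000'
SHARED_KEY   = Base64.strict_encode64('dummy-key')

# Build the SharedKey authorization header for the Data Collector API.
def build_signature(shared_key, workspace_id, date, content_length)
  string_to_sign = "POST\n#{content_length}\napplication/json\nx-ms-date:#{date}\n/api/logs"
  decoded_key    = Base64.decode64(shared_key)
  hmac           = OpenSSL::HMAC.digest('sha256', decoded_key, string_to_sign)
  "SharedKey #{workspace_id}:#{Base64.strict_encode64(hmac)}"
end

body = [{ 'Computer' => 'kemp01', 'Counter' => 'TotalConns', 'Value' => 42 }].to_json
date = Time.now.httpdate
auth = build_signature(SHARED_KEY, WORKSPACE_ID, date, body.bytesize)

uri = URI("https://#{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01")
req = Net::HTTP::Post.new(uri)
req['Content-Type']  = 'application/json'
req['Log-Type']      = 'KempQND'   # LA appends _CL and stores this as a custom type
req['x-ms-date']     = date
req['Authorization'] = auth
req.body = body
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

Whatever Log-Type you pass, LA stores the records as a custom data type (with the _CL suffix), which is exactly the limitation discussed above.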
When I was thinking about all these scenarios a couple of articles came to my rescue and lit up another path.
Here comes the OMS Linux agent: it is a great extensibility story I started to investigate in a previous post of mine:

  • the agent is open source on github; you can fork, test and extend the code, even if you cannot yet contribute
  • the agent uses fluentd, which I discovered is a world of its own
  • the agent can be easily extended by writing your own fluentd plugins!

I want to emphasise the last point: the agent can be easily extended by writing your own fluentd plugins, and/or by picking existing plugins and adding them to the agent toolset.
This way we can send LA all the native types, and we can still create our own (using the ingestion API).

Second Foreword

I used to teach Unix (uh well, Xenix) back in the early nineties, but then I focused on Microsoft solutions and built my professional career there.
Obviously Unix and Linux were still there, but all I did was integration into Microsoft architectures and some basic monitoring. Why am I writing this? Because my knowledge is more than rusty, so from a *nix professional’s standpoint what I’m going to post will be obvious, probably not optimized and not following established best practices (which, btw, I ignore). Add to the picture that fluentd plugins are written in Ruby, a language I don’t know anything about, and you can understand there are going to be a couple of things that can be improved :-).

The basics


In a few words, this is how the entire process works:

  • the agent reads the (fluentd) configuration files. Every workflow has an input plugin and an output plugin, and many also have filter and transformation plugins. The input plugin configuration specifies the data tag; the tag is used to route the data through the subsequent workflows matching the other filters. In the following snippet I declare a syslog input listening on port 25326/udp and I tag all the data from that source as oms.qnd.Kemp

<source>
  type syslog
  port 25326
  protocol_type udp
  tag oms.qnd.Kemp
  log_level debug
</source>
  • when a source plugin has data, the fluentd workflow starts and the data is processed by the plugins that declare a match on that tag. In the following snippet I instruct fluentd to invoke a custom filter for the data tagged oms.qnd.Kemp. Please note the ** that basically means “and everything that comes after”; in fact many source plugins add suffixes to the defined tag to better categorize the data

<filter oms.qnd.Kemp.**>
  type filter_kemp
</filter>
  • applying the same mechanics, the data stream reaches an output plugin that, in our case, sends data to LA. The agent defines the following output plugins:
    • out_oms.rb: I call this the standard output to LA; it is used for the vast majority of the built-in sources, such as syslog, performance, docker and so on
    • out_oms_api.rb: this is the ingestion API output, so it lets you post your custom logs from a Linux box
    • out_oms_blob.rb: I don’t know yet what it is used for
    • out_oms_statsd_aggregator.rb: this is the output plugin for statsd-instrumented applications. I learnt that statsd is a lingua franca for instrumenting applications and sending telemetry to statsd-enabled monitoring solutions, such as our own OMS agent
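To give an idea of what a custom filter plugin does, here is a minimal, hypothetical sketch of the record-reshaping logic a filter_kemp plugin could implement. In the real plugin this logic would live in the filter method of a Fluent::Filter subclass registered with Plugin.register_filter('filter_kemp', self); the field names and the message format below are my own invention:

```ruby
# Hypothetical sketch: reshape a syslog record into a structured one.
# In the agent this would be the filter() method of a Fluent::Filter
# subclass registered as 'filter_kemp'.
def filter_kemp_record(record)
  # The syslog input plugin delivers the raw line in the 'message' key.
  message = record['message'].to_s
  # Assume (for illustration only) counters arrive as "name=value" pairs.
  counters = message.scan(/(\w+)=(\d+)/).map { |k, v| [k, v.to_i] }.to_h
  {
    'Host'       => record['host'],
    'ObjectName' => 'Kemp',
    'Counters'   => counters
  }
end

sample = { 'host' => 'lb01', 'message' => 'conns=12 bytesin=3400' }
puts filter_kemp_record(sample).inspect
# prints {"Host"=>"lb01", "ObjectName"=>"Kemp", "Counters"=>{"conns"=>12, "bytesin"=>3400}}
```

The interesting part is that whatever hash the filter returns is what the downstream output plugin receives, so shaping the record here is what decides how the data looks once it lands in LA.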

How CollectD fits in the picture

I see collectd as a companion of fluentd to gather performance data points (and probably more), but right now the OMS agent filter for collectd just translates the data into performance points.
The collectd configuration is as follows (for more details see the references):

  • you must configure collectd according to its syntax, as you would have done for any other collectd implementation
  • you must instruct collectd to send data to an http source plugin in OMS

LoadPlugin write_http

<Plugin write_http>
  <Node "oms">
    URL ""
    Format "JSON"
    StoreRates true
  </Node>
</Plugin>
  • the oms agent is configured to listen for http input on port 26000 by default; the uri is the data tag, so with the configuration above the collectd data is tagged as “oms.collectd”. The http input can be used by any piece of software capable of performing a POST with a JSON payload

<source>
  type http
  port 26000
</source>
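For example, here is a minimal Ruby sketch that builds a JSON payload in the shape collectd’s write_http plugin emits and posts it to the agent’s http input. The host, metric names and plugin name are illustrative, and the actual send is commented out so it only works on a box running the OMS agent:

```ruby
require 'json'
require 'net/http'

# A single data point in the shape collectd's write_http plugin emits.
payload = [{
  'host'          => 'lb01',
  'plugin'        => 'kemp',          # illustrative plugin name
  'type'          => 'gauge',
  'type_instance' => 'total_conns',
  'time'          => Time.now.to_f,
  'interval'      => 10.0,
  'dsnames'       => ['value'],
  'dstypes'       => ['gauge'],
  'values'        => [42]
}].to_json

# The URI path becomes the fluentd tag (oms.collectd in this case).
uri = URI('http://127.0.0.1:26000/oms.collectd')
req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
req.body = payload
# Uncomment on a box running the OMS agent:
# Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```

This is the same path collectd itself uses, which is why any software that can POST JSON can feed the agent this way.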

File locations and debugging

  • Where must I add my fluentd plugins? In /opt/microsoft/omsagent/plugin: this directory, used by the Microsoft provided plugins, is where you must add all the plugins you want the agent to use.
  • Where must I add my configurations for the OMS agent? In /etc/opt/microsoft/omsagent/conf/omsagent.d: all the files with a “.conf” extension are automatically added to the agent configuration at agent startup, so if you add or modify a configuration you must restart the oms agent (sudo service omsagent restart)
  • Where are the log files? /var/opt/microsoft/omsagent/log
  • How can I debug fluentd plugins? I don’t know how to perform step by step debugging (yet)
  • How can I trace fluentd plugins? OK, it’s not easy debugging, but by adding trace statements and setting the log_level property in the conf file (I suggest trace) you can check your plugin execution the old way. The tracing is written to omsagent.log by default. In your ruby code you can use @log.trace “message” to write at trace level, or use info, debug, warn and error. See logging in fluentd
  • Where are the collectd custom configuration files? /etc/collectd/collectd.conf.d

References

  1. Extending #MSOMS with the Linux Agent | Quae Nocent Docent
  2. Announcing Kemp Application Delivery #MSOMS solution preview | Quae Nocent Docent
  3. Operations Management Suite (OMS) – Tech Guide
  4. #Extending #MSOMS Log Analytics through Linux – part 2 – custom ruby filter | Quae Nocent Docent
