[RFC] A monitoring solution based on the powers of Salt
Hey Salt users,

I'm going to work on a monitoring solution that uses Salt to execute
checks via Salt's execution modules and return the data to a data store
like Elasticsearch.
The idea is to make it very modular and the processing logic easy to
customize for several use cases.

Since Salt and Elasticsearch do most of the work, it should be easy to
build the missing parts: "check/job processing" and
"notification/event triggering".

I'm looking for some comments about this. Some suitable use cases would
be nice, especially ones that aren't that easy to implement with today's
monitoring giants Nagios and friends.

Is there any interest in this project? Feel free to add your feature
requests on GitHub (<https://github.com/matchBIT/elija-monitoring>).

Thanks!


Arnold

--
Arnold Bechtoldt

Karlsruhe, Germany

  • T.J. Yang at Nov 3, 2014 at 2:12 pm
    Hi Arnold,


    Thanks for initiating this Elija project; I'm looking forward to seeing it
    implemented.

    What software did you use to draw the architecture diagram?
    Can you also release the source to GitHub?

    tjy
  • Arnold Bechtoldt at Nov 3, 2014 at 2:36 pm
    What software did you use to draw the architecture diagram?
    It's Gliffy from Atlassian, nothing special.

    Can you also release the source to GitHub?
    The source code? Yes, of course. I've already started developing and am
    going to publish the first set as soon as I think the code quality is
    acceptable for the world. ;)


    Arnold

    --
    Arnold Bechtoldt

    Karlsruhe, Germany
  • T.J. Yang at Nov 3, 2014 at 3:06 pm
    Thanks for the quick reply. I will move my Elija discussion to your GitHub.
  • Daniel Jagszent at Nov 3, 2014 at 3:54 pm
    Hello Arnold,

    I tried using Salt's 0MQ for executing checks on the minions. I wrote a
    simple daemon on the master that executed (custom) Salt modules on the
    minions at periodic intervals and pushed the results every 60 seconds or
    so to an Icinga instance (also via Salt). Even with some optimizations
    (combine checks as much as possible, spread out the checks to decrease
    the master load, limit the number of parallel checks) I ran into scaling
    problems with only approx. 60 minions.
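
    (A simplified, hypothetical sketch of this kind of master-side polling
    daemon is below; the actual daemon is in the gist linked at the end of
    this message. The check function name is made up here, only
    salt.client.LocalClient is the real API.)

        # Hypothetical sketch of a master-side polling daemon.
        import time

        import salt.client


        def forward_to_icinga(minion_id, result):
            # Placeholder: in the setup described above, results were pushed
            # to Icinga via Salt as well.
            print(minion_id, result)


        def poll_checks(interval=60):
            client = salt.client.LocalClient()
            while True:
                # One job per polling interval; every job also lands in the
                # master's job cache, which is where the keep_jobs and
                # worker_threads tuning mentioned below becomes relevant.
                results = client.cmd('*', 'elija_checks.disk_usage', timeout=30)
                for minion_id, result in results.items():
                    forward_to_icinga(minion_id, result)
                time.sleep(interval)


        if __name__ == '__main__':
            poll_checks()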

    I had to decrease the keep_jobs option to 1 hour. The default of 24
    hours would result in millions of files in the job cache directory – Salt's
    job garbage collection could not handle this. I also had to increase
    worker_threads to approximately the number of minions, otherwise timeouts
    when executing Salt modules would be too common. (Even after increasing
    worker_threads, timeouts still occurred every now and then.) That's why we
    abandoned using Salt's 0MQ communication for monitoring. We now use (of
    course Salt-managed) NSCA-ng servers/clients to collect the monitoring checks.

    Maybe RAET will solve these problems. Maybe they were unique to my setup in
    the first place. Anyway, I'm looking forward to seeing how Elija evolves.

    PS: Here you can find the (quite undocumented) daemon I used:
    https://gist.github.com/d--j/317b28a5fb14ac89227f
  • Mohammad H. Al Shami at Nov 3, 2014 at 4:11 pm
    Hello Daniel,

    I was thinking about doing that exact same thing. I also configured a proxy minion to check devices like switches and routers that can't run the minion; I only finished a draft a few hours ago.

    Were you doing the checks one minion at a time, or were you doing role-based scheduling, pretty much like Sensu (salt -G 'role:web' cmd.run check --return some_returner)? I think role-based scheduling should perform a bit better. I was thinking of using Redis as the returner, which should be able to handle 60 minions, or am I mistaken?

    I never thought the job cache could be an issue; that would be an interesting discussion.
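
    (For what it's worth, the same role-based pattern driven from the master
    side might look roughly like the Python sketch below; elija_checks.combined
    is a hypothetical check function, and older Salt releases spell the
    targeting keyword expr_form instead of tgt_type.)

        # Hypothetical master-side equivalent of the Sensu-style CLI call above:
        # target minions by the 'role' grain and hand results to the redis returner.
        import salt.client

        client = salt.client.LocalClient()
        results = client.cmd(
            'role:web',               # equivalent of: salt -G 'role:web' ...
            'elija_checks.combined',  # hypothetical check function
            tgt_type='grain',         # grain-based targeting
            ret='redis',              # also push results through the redis returner
            timeout=30,
        )
        for minion_id, result in results.items():
            print(minion_id, result)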


  • Daniel Jagszent at Nov 3, 2014 at 4:46 pm
    Hello Mohammad,

    I did the checks one minion at a time but combined several checks into
    one call (e.g. checking CPU load, swap usage and disk usage in one go).
    Maybe doing role-based checks would be more performant, but my setup is
    too heterogeneous to identify strict roles and I have quite a few local
    exceptions (e.g. I need to be able to exclude a certain mount point from
    disk usage checks on one minion, or add a custom check for a single
    minion), so I could not try this.
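
    (As a minimal illustration, combining several checks into one call could
    look like the following execution-module function; the names and the
    exact metrics are invented for this sketch.)

        # Illustrative combined check: CPU load, swap and disk usage in one call
        # (Linux-specific because of /proc/meminfo and os.getloadavg()).
        import os
        import shutil


        def combined(path='/'):
            load1, load5, load15 = os.getloadavg()
            disk = shutil.disk_usage(path)
            with open('/proc/meminfo') as meminfo:
                mem = dict(line.split(':', 1) for line in meminfo)
            swap_total_kb = int(mem['SwapTotal'].split()[0])
            swap_free_kb = int(mem['SwapFree'].split()[0])
            return {
                'load': {'1m': load1, '5m': load5, '15m': load15},
                'swap_used_kb': swap_total_kb - swap_free_kb,
                'disk_used_pct': round(disk.used * 100.0 / disk.total, 2),
            }
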
    AFAIK, even with role-based checks, the job cache would have approx. the
    same number of files in it (basically one file per job per minion).

    I have not experimented with using a returner. I wanted to use Salt in
    the first place to simplify the whole firewall problem in a
    multi-datacenter, multi-VLAN setup. If the minion can communicate with
    the master, the monitoring checks also automatically work. If you use a
    custom returner, this returner must be accessible from all minions. In
    my case that would have meant basically opening up the Redis server to
    the whole internet.
  • Arnold Bechtoldt at Nov 4, 2014 at 7:09 pm
    I have not experimented with using a returner.
    I think trying this can be very valuable. There are applications
    specialised in simply storing and managing this data. :)

    --
    Arnold Bechtoldt

    Karlsruhe, Germany
  • Arnold Bechtoldt at Nov 4, 2014 at 7:06 pm
    Hey guys,

    Sorry for my late reply.

    Matthew (<https://github.com/mgwilliams/>) told me that he started
    working on a _very similar_ solution but with more Salt integration.
    Check <https://github.com/mgwilliams/salt/compare/workarea> for a
    current diff and
    <https://github.com/mgwilliams/salt-monitoring/blob/master/salt-monitoring.pdf>
    for some background information.

    I'm going to work with him on this approach to get it into Salt upstream
    very soon.

    Even with some optimizations
    (combine checks as much as possible, spread out the checks to decrease
    the master load, limit the number of parallel checks) I ran into scaling
    problems with only approx. 60 minions.
    Hm, have you already tried this with 2014.7?

    Are you sure it's the message bus rather than the job management on the
    master side (which can be outsourced when there is a lot of data)?



    --
    Arnold Bechtoldt

    Karlsruhe, Germany
  • Daniel Jagszent at Nov 5, 2014 at 4:48 pm

    Arnold Bechtoldt wrote:
    Even with some optimizations
    (combine checks as much as possible, spread out the checks to decrease
    the master load, limit the number of parallel checks) I ran into scaling
    problems with only approx. 60 minions.
    Hm, have you already tried this with 2014.7?

    Are you sure it's the message bus rather than the job management on the
    master side (which can be outsourced when there is a lot of data)?
    No, I did not try it with 2014.7. This was in the early days of 2014.1. Nowadays we use NSCA-ng for the monitoring checks. That scales far better, so I do not have the need to investigate this scaling problem further. What do you mean by «job management»? The job cache? Maybe the many files in the job cache directory were the culprit. But increasing the number of worker threads helped combat the timeouts. That would – in my opinion – suggest that the job cache was not the scaling bottleneck.
    I have not experimented with using a returner.
    I think trying this can be very valuable. There are applications
    specialised in simply storing and managing this data.
    Using an external returner means that this returner needs to be
    accessible from all minions (in my case that machine would need to be in
    several VLANs and external firewalls would need to be opened up for the
    service). If I have to go through all that hassle anyway, I'd rather
    install NSCA-ng, configure it via Salt and be happy.


  • Les Mikesell at Nov 3, 2014 at 4:57 pm

    Would that be able to forward things like syslog and SNMP traps that
    originate independently through the Salt plumbing to the common
    database, or does it just handle the output of things started as Salt jobs?

    --
        Les Mikesell
          lesmikesell@gmail.com

  • Ryan John Peck at Nov 4, 2014 at 5:55 pm
    Les -

    I'm guessing he intends to use salt.modules to gather runtime information
    from the systems - and stop there. Routing syslog and SNMP traps is
    probably best suited to log-aggregator-like applications.

  • Les Mikesell at Nov 4, 2014 at 6:45 pm

    That makes sense for the sort of thing that Salt can fix by itself,
    but if you don't extend an existing framework you have to reinvent a
    lot of concepts involving escalating events into alarms and
    notifications and matching them up with acknowledgments to have a real
    monitoring solution. I was just thinking it would be nice to be able
    to glue mature tools like syslog and SNMP to a central system with
    the transport provided by Salt instead of configuring everything
    individually with routing and firewall openings.

    --
       Les Mikesell
          lesmikesell@gmail.com

  • Arnold Bechtoldt at Nov 4, 2014 at 7:15 pm
    When using syslog as a source for the monitoring/check data you might be
    interested in fluentd, though I haven't tried it so far.

    <http://www.fluentd.org/architecture>

    --
    Arnold Bechtoldt

    Karlsruhe, Germany
  • Arnold Bechtoldt at Nov 4, 2014 at 7:12 pm
    Ack.

    Check
    <https://github.com/mgwilliams/salt/compare/workarea#diff-99ce7937bfbdd10ff13b774473f0991cR115>.

    I plan to suggest some changes to this function, but as an example/PoC
    function I think it does its job well. :)


    --
    Arnold Bechtoldt

    Karlsruhe, Germany
  • Matthew Williams at Nov 4, 2014 at 7:22 pm
    The monitoring branch (https://github.com/mgwilliams/salt/compare/monitoring)
    might be a better place to look at the relevant files. 'workarea' is
    monitoring plus any of my pull requests to upstream that have not yet been
    merged.

Discussion Overview
group: salt-users
posted: Nov 1, '14 at 11:02a
active: Nov 5, '14 at 4:48p
posts: 16
users: 7
