FAQ
hi, all

I have a puppet master serving about 3000 clients; the runinterval is 20 min.
The problem is that PuppetDB's MQ backlog grows so long that the data in
Postgres is stale (an old version). Any suggestions for improving performance?

LOG like this:

[root@ppmaster /home/puppetdb]# du /home/puppetdb/ -sh
381M /home/puppetdb (and growing bigger and bigger)

[root@ppmaster /home/puppetdb]# tac /home/puppet/log/puppetmaster.log |
fgrep entry111.***.com | head
Sat Dec 15 01:46:20 +0800 2012 //entry111.***.com/Puppet (notice):
Finished catalog run in 31.02 seconds
Sat Dec 15 01:45:45 +0800 2012 Puppet (notice): Compiled catalog for
entry111.***.com in environment production in 1.29 seconds


[root@ppmaster /home/puppetdb]# tac
/var/log/puppetdb/puppetdb-daemon.log | fgrep entry111.***.com | head
2012-12-15 01:34:52,717 INFO [puppetdb.command]
[0a306a6b-2188-432d-b5d9-eb7253e89a8f] [replace catalog] entry111.***.com


puppetdb=> select * from certname_catalogs where certname = 'entry111.***.com';
     certname     |                 catalog                  |         timestamp
------------------+------------------------------------------+----------------------------
 entry111.***.com | c75054f00c76b80ff33b4ceb8f195e74dabd630f | 2012-12-15 01:05:36.153+08

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to puppet-users+unsubscribe@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.


  • Deepak Giridharagopal at Dec 14, 2012 at 8:02 pm

    On Fri, Dec 14, 2012 at 11:00 AM, scukaoyan@gmail.com wrote:

    hi, all

    I have a puppet master serving about 3000 clients; the runinterval is 20
    min. The problem is that PuppetDB's MQ backlog grows so long that the
    data in Postgres is stale (an old version). Any suggestions for improving
    performance?
    We'll need some more information in order to get a complete picture of
    what's going on. Can you send a screenshot of your puppetdb web console
    after you've had it up for a few minutes? In particular, we'd need to see
    the rate at which your queue is growing, metrics around command processing
    time and enqueue time, as well as information about catalog & resource
    duplication rates. Also, what version of Postgres are you using, and have
    you tuned any of the settings? Lastly, is the puppetdb system experiencing
    a lot of iowait, or is it otherwise blocked on i/o or system time?
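
    For the iowait question, here's a minimal, Linux-only sketch that reads
    the cumulative CPU counters from /proc/stat (field 6 of the `cpu` line
    is iowait jiffies); in practice `iostat -x 5` from the sysstat package
    gives a better live view:

```shell
# Rough iowait check from /proc/stat (Linux only). These counters are
# cumulative since boot, so for a current figure take two samples a few
# seconds apart and diff them; this one-shot version is just a sketch.
read -r _ user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))  # ignores irq/steal; rough
awk -v w="$iowait" -v t="$total" \
    'BEGIN { printf "iowait since boot: %.1f%%\n", 100 * w / t }'
```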

    With that many nodes and a 20 minute runinterval, you'll be sending ~2.5
    catalogs every second to puppetdb...it's not insurmountable, but we'll need
    to look at the whole system in totality to see where the bottleneck is. :)
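
    A quick sanity check of that arithmetic, for anyone following along:

```shell
# Back-of-the-envelope catalog submission rate: 3000 nodes, each
# submitting one catalog per 20-minute run.
nodes=3000
interval_secs=$((20 * 60))   # 1200 seconds
awk -v n="$nodes" -v s="$interval_secs" \
    'BEGIN { printf "%.2f catalogs/sec\n", n / s }'
```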

    The first-order thing I'd look at is your catalog duplication rate...if
    that's low, then puppetdb is doing a lot of reads/writes to the database
    for every catalog received. Improving that number will help let puppetdb
    short-circuit writing to disk at all, which should improve throughput
    considerably.

    You can also come and find us on IRC at #puppet on Freenode; that may make
    for a faster back-and-forth.

    Thanks!

    --
    deepak / Puppet Labs

    --

Discussion Overview
group: puppet-users
categories: puppet
posted: Dec 14, 2012 at 7:05pm
active: Dec 14, 2012 at 8:02pm
posts: 2
users: 2
website: puppetlabs.com
