Could anyone point me to a good existing discussion of Puppet scalability?
I'm referring to the Puppet master and surrounding ecosystem, not the
actual agents sitting on the managed servers.
In particular anything that would shed insight upon:
1. Does a Puppet master degrade gracefully when overwhelmed, or do
things start failing outright rather than merely slowing down?
2. How does changing the Puppet polling interval (runinterval etc.)
factor in? Does Puppet make it safe to increase workload and polling
frequency, knowing that at worst there will be slowness, or does it
leave it to the operator's gut and trial and error to figure out how
much load is fine, requiring them to throttle workloads and hold their
breath when rolling out changes? Can workload be cancelled under
excessive load, the way one might cancel parallel FTP jobs, or is this
approach not a design tenet at all?
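For reference, the polling interval I mean is the agent-side
runinterval setting in puppet.conf; the values below are only
illustrative (the default runinterval is 30 minutes):

    # puppet.conf on each managed agent
    [agent]
    runinterval = 5m   # poll the master every 5 minutes instead of the default 30m
    splay = true       # add random jitter so agents don't all hit the master at once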
I'm assuming the answers here are not a binary 0 or 1, so a balanced
discussion of how close things are to either extreme is of interest.
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To view this discussion on the web visit https://groups.google.com/d/msg/puppet-users/-/bNuzUMUUxJIJ.