We have an existing "management system" of sorts, based on rdist. I'd like
to know the best way to migrate it to using puppet.

Currently, we have a local binaries tree, rdisted out nightly. We also
make use of rdist's extra capability to trigger scripts when and if named
files are updated.
I'm not sure of the best method for converting this to Puppet.

I haven't found any Puppet method that seems clearly designed for
"replicate this large tree of files out to clients".
The local tree is 260 MB, across 5800 files.
Converting all of that to be package based would be a chore. It would
also meet with a great deal of pushback from admins who are used to making
changes by just logging on to the rdist master, changing the tree, and then
being done. Building a new package for every change would be very
unpopular.

Even if I limited the Puppet involvement to just "if file /xx/yy changes,
do z" triggers... doesn't that require that it has some master copy of
/xx/yy somewhere to compare to?
Or does the local Puppet daemon take timestamps the first time it runs?
But then what about when the daemon restarts?


  • Luke Bigum at Apr 27, 2012 at 3:11 pm
    Hi Philip,

    I've never used rdist before, but I've just checked the man page
    quickly... How many servers do you have that you've ended up with a
    260 MiB, 5800-file repository? Is this a raw file count, or are some of
    those files redundant (like ldap.conf going out to every single server
    and being counted 100+ times)?

    -Luke

  • Philip Brown at Apr 27, 2012 at 3:15 pm

    On Friday, April 27, 2012 8:11:38 AM UTC-7, Luke Bigum wrote:
    Hi Philip,

    I've never used rdist before, but I've just checked the man page
    quickly... How many servers do you have that you've ended up with a
    260 MiB, 5800-file repository? Is this a raw file count, or are some of
    those files redundant (like ldap.conf going out to every single server
    and being counted 100+ times)?
    300+ servers.
    5800 files go out (or, more specifically, get synced) identically to all
    machines, every night.
    Probably 200 of those are config files; the rest are specially compiled
    binaries and util scripts.

  • Luke Bigum at Apr 27, 2012 at 3:28 pm
    Ok... Not a small job ;-)

    For config files that are exactly the same on all machines, that's
    super easy: each file is stored once on the Puppet Master and managed
    on all Puppet Agents.

    I'm not sure if rdist has some templating system in it for
    machine-specific config files, but Puppet has a Ruby-based templating
    system that can handle lots of cases. Other, more complex files can be
    done with what are called "concat" files - one file is assembled from
    different fragments.

    Scripts and especially compiled binaries are tricky. Puppet's not the
    best file server. The more files you manage as actual Puppet Files, the
    more work is involved. By default the Puppet Master MD5-sums files and
    the Agent does the same to work out whether a file needs to be copied
    down. There are some improvements you can make, like changing the
    checksum method, but 260 MiB to every one of 300+ servers? Sounds like
    a bad idea before you start.
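    For what it's worth, a hedged sketch of the checksum tweak mentioned
    above (module name and paths are illustrative): a recursively managed
    directory can be told to compare mtimes instead of MD5-summing every
    file, which reduces the hashing cost but doesn't change the fact that
    Puppet is a heavyweight way to ship a tree this large.

    file { '/opt/local-tools':
      ensure   => directory,
      recurse  => true,
      source   => 'puppet:///modules/tools/local-tools',
      checksum => 'mtime',  # compare timestamps rather than MD5 sums
    }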

    What problem are you trying to solve that you think you need Puppet? The
    recommended Puppet way would be to package your binaries and use Puppet
    to enforce new versions of the package. You said your admins are used to
    just getting on the rdist master, making changes, and then effectively
    running an rsync? If that's the way you work and the way you want to
    continue to work, then I don't think Puppet's going to beat rdist for
    this use case.

    -Luke

  • Philip Brown at Apr 27, 2012 at 4:40 pm

    On Friday, April 27, 2012 8:28:32 AM UTC-7, Luke Bigum wrote:
    What problem are you trying to solve that you think you need Puppet? The
    recommended Puppet way would be to package your binaries and use Puppet
    to enforce new versions of the package. You said your admins are used to
    just getting on the rdist master, making changes, and then effectively
    running an rsync? If that's the way you work and the way you want to
    continue to work, then I don't think Puppet's going to beat rdist for
    this use case.
    I was afraid of that. Well, even if we continue doing rdisted binary
    distribution, I think the additional "run if changed" hooks might be
    better served by Puppet, yes?
    The current triggers are a bit quirky. And we do many configs as
    symlinks on individual machines to the "standard" configs in the rdisted
    common tree. I'd rather have that stuff handled by Puppet configs.
    There are "only" about 15 triggers, and 10-ish symlinks per machine.
    For symlinks, I mean stuff like

    /etc/resolv.conf -> /shared/path/resolv/resolv.conf-machinetype

    I'd rather Puppet make actual COPIES of files; that works better with
    Solaris patches. So I'm thinking of some kind of Puppet class that
    auto-copies /shared/path/resolv/resolv.conf-machinetype to
    /etc/resolv.conf whenever the /shared/path version gets changed by rdist.
    Is that going to work reliably?


    Triggers do things like "if the config file target has changed, restart
    the daemon". So, a perfect fit for Puppet there.


  • Luke Bigum at Apr 28, 2012 at 9:11 am
    Yes, Puppet is perfect for your file-copy-and-hook scenario. In Puppet
    speak it's "notify" and "subscribe" between resources; here's a very
    quick example that will restart some daemon if /etc/resolv.conf
    changes:

    node 'somehost' {
      class { 'resolv': }
    }

    class resolv {
      $resolv_conf  = '/etc/resolv.conf'
      $service_name = 'some-daemon'

      file { $resolv_conf:
        source => "puppet:///modules/${module_name}/${resolv_conf}",
        notify => Service[$service_name],
      }

      service { $service_name:
        ensure     => running,
        enable     => true,
        hasrestart => true,
        require    => File[$resolv_conf],
      }
    }

    Puppet's module design pushes you in the direction of keeping your
    Puppet-controlled config files inside the module that manages them. It
    *is* possible to reference other paths on the file system (like your
    existing rdist tree) if you want, but I wouldn't mix the two, to avoid
    confusion.

    Generally you'd put your master resolv.conf file on your Puppet Master
    somewhere like: /etc/puppet/modules/resolv/files/etc/resolv.conf

    I would start by converting all your config files over first. Once
    you realise how easy it is to write modules, you might be able to
    convince some of your admins to start packaging your code; then you can
    introduce package hooks to restart services or run scripts when
    software updates, and maybe by then you'll have everyone hooked ;-)

    The Puppet Package type supports a lot of different providers and OSes:

    http://docs.puppetlabs.com/references/2.7.0/type.html#package

  • Philip Brown at Apr 28, 2012 at 2:53 pm

    On Saturday, April 28, 2012 2:11:23 AM UTC-7, Luke Bigum wrote:
    Yes, Puppet is perfect for your file-copy-and-hook scenario. In Puppet
    speak it's "notify" and "subscribe" between resources, here's a very
    quick example that will restart Some Daemon if /etc/resolv.conf
    changes:

    node 'somehost' {
      class { 'resolv': }
    }

    class resolv {
      $resolv_conf  = '/etc/resolv.conf'
      $service_name = 'some-daemon'

      file { $resolv_conf:
        source => "puppet:///modules/${module_name}/${resolv_conf}",
        notify => Service[$service_name],
      }
    ....


    But that requires the files be hosted on the puppet master.
    What if the conf files are still rdisted out under /rdist/base instead?
    What does that look like?
    Once
    you realise how easy it is to write modules you might be able to
    convince some of your admins to start packaging your code,
    Sadly, the chances of getting all sysadmins to be diligent about creating
    packages are pretty much zero.
    They only want to deal with premade downloadable packages.

  • Jcbollinger at Apr 30, 2012 at 1:52 pm

    On Apr 28, 9:53 am, Philip Brown wrote:

    But that requires the files be hosted on the puppet master.
    What if the conf files are still rdisted out under /rdist/base instead?
    What does that look like?
    Once
    you realise how easy it is to write modules you might be able to
    convince some of your admins to start packaging your code,
    Sadly, the chances of getting all sysadmins to be diligent about creating
    packages are pretty much zero.
    They only want to deal with premade downloadable packages.

    It sounds like you might be able to split this into two parts. Surely
    there aren't many of those 5800 files that your sysadmins routinely
    change (else they're acting as developers, not admins). Build a
    package or packages of the binaries, also containing whatever default
    configuration files are appropriate. In the package, be sure to tag
    the config files as such. Manage the packages as Packages, and on top
    of that manage the config files as Files.
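
    A rough sketch of that split (the package and file names are invented
    for illustration):

    # The binaries tree, built into a package on whatever nightly schedule suits.
    package { 'site-binaries':
      ensure => latest,
    }

    # One of the admin-edited config files, managed on top of the package.
    file { '/etc/app.conf':
      ensure  => file,
      source  => 'puppet:///modules/siteconfig/app.conf',
      require => Package['site-binaries'],
    }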

    Using such a strategy, if you give your admins access to the Puppet
    master's copies of the config files then they could work much as they
    do now: change the file on the Puppet master, and expect it to be
    rolled out to all the appropriate nodes within a predictable time.
    Unchanged files will not trigger updates on the nodes. Updates to the
    binaries will require new packages to be built, but that oughtn't to
    be your admins' duty.

    And if your admins do not appreciate the advantages of avoiding
    installing unpackaged binaries then you need better admins.


    John

  • Philip Brown at Apr 30, 2012 at 3:52 pm

    On Mon, Apr 30, 2012 at 6:52 AM, jcbollinger wrote:
    On Apr 28, 9:53 am, Philip Brown wrote:

    Sadly, the chances of getting all sysadmins to be diligent about creating
    packages are pretty much zero.
    They only want to deal with premade downloadable packages.

    It sounds like you might be able to split this into two parts.  Surely
    there aren't many of those 5800 files that your sysadmins routinely
    change (else they're acting as developers, not admins).  Build a
    package or packages of the binaries, containing also whatever default
    configuration files are appropriate.

    I've already said that converting modified files to packages was not an option.
    And if your admins do not appreciate the advantages of avoiding
    installing unpackaged binaries then you need better admins.
    Whether or not that is true, it unfortunately does not bring me closer to
    a usable technical solution :(

  • Jcbollinger at May 1, 2012 at 1:44 pm

    On Apr 30, 10:52 am, Philip Brown wrote:
    I've already said that converting modified files to packages was not an option.
    No, you said that getting your admins to deploy config changes by
    packaging them up was not an option. My suggestion avoids imposing
    any need for them to do that.

    It may be that you also don't want to package up the rest, but that's
    a different story altogether. You can surely automate that process if
    you wish to do so -- nightly, say, on the same schedule that you now
    rdist -- and there are advantages to that beyond integrating Puppet
    into your infrastructure.

    In any case, the degree to which Puppet can help you is modulated by
    the degree to which you are willing to adopt techniques that work well
    with Puppet.


    John

  • Jcbollinger at May 1, 2012 at 1:59 pm

    On Apr 28, 9:53 am, Philip Brown wrote:

    But that requires the files be hosted on the puppet master.
    What if the conf files are still rdisted out under /rdist/base instead?
    What does that look like?
    It looks exactly like what you are now doing (i.e. no Puppet). How do
    you suppose Puppet is going to recognize that it needs to notify a
    service if it's not managing the file? Really, think about it: how
    might Puppet know a file changed if it's not the one changing it?
    Could all that extra work really be an improvement over rdist
    triggers? (Hint: not likely.)

    I think it would be useful to you to consider what you hope to achieve
    by incorporating Puppet into your infrastructure. Your rdist system
    must be working fairly well because you seem resistant to changing
    it. What, then, do you think Puppet can bring to the table?


    John

  • Adam Heinz at May 1, 2012 at 3:53 pm
    I can't say that my Puppet installation is even close to best
    practices, but I think I have a situation similar enough to the OP's to
    put it up for scrutiny. I deploy 7600 files to /var/www/html using
    Puppet and rsync. Puppet manages an rssh + chroot-jailed read-only file
    share and provides the web head with an SSH key to access it.

    This has the advantage of working around Puppet's heavyweight file
    handling while still giving you the opportunity to attach to the
    subscribe/notify infrastructure.

    $rsync = "/usr/bin/rsync -a rsync@puppet:html/ /var/www/html --delete"

    exec { "Sync /var/www/html":
      command => $rsync,
      notify  => Service["httpd"],
      onlyif  => "test `$rsync --dry-run --itemize-changes | wc -l` -gt 0",
      require => Host["puppet"],
    }

  • Philip Brown at May 1, 2012 at 4:24 pm

    On Tue, May 1, 2012 at 6:58 AM, jcbollinger wrote:
    But that requires the files be hosted on the puppet master.
    What if the conf files are still rdisted out under /rdist/base instead?
    What does that look like?
    It looks exactly like what you are now doing (i.e. no Puppet).  How do
    you suppose Puppet is going to recognize that it needs to notify a
    service if it's not managing the file?

    That was indeed a major part of my question.
    I thought it keeps some kind of database of file checksums, etc.?
    Doesn't Puppet support some kind of
    "action if file changed", even if it doesn't "manage the file" itself?
    I think it would be useful to you to consider what you hope to achieve
    by incorporating Puppet into your infrastructure.  Your rdist system
    must be working fairly well because you seem resistant to changing
    it.  What, then, do you think Puppet can bring to the table?
    A fair question. I thought I had mentioned it, but perhaps not
    sufficiently clearly:
    I want to change our existing hosttype:/etc/file.conf ->
    /rdistbase/conf/file.conf.hosttype symlink methodology to be more
    like

    node hosttype {
      keep /rdistbase/conf/file.conf.host synced to /etc/file.conf
    }


    Our existing methodology works well 95% of the time, and there are
    reasons to keep it in place as is. But I want 100% coverage; symlinks
    break patches, and a few other things. The only way I can see to extend
    things in that direction is to have Puppet manage the duplication on a
    per-host basis.

    Yes, I understand the "normal Puppet way" of doing things is to have
    those conf files inside the Puppet tree, but it is more maintainable
    *for us* to have all multi-host related stuff in the single rdist
    directory tree.

  • Nan Liu at May 1, 2012 at 4:36 pm

    On Tue, May 1, 2012 at 9:24 AM, Philip Brown wrote:
    Yes, I understand the "normal Puppet way" of doing things is to have
    those conf files inside the Puppet tree, but it is more maintainable
    *for us* to have all multi-host related stuff in the single rdist
    directory tree.
    You can let rdist deliver the file as usual and, in the file resource's
    source, point at the local rdist directory instead of a puppet:/// URL,
    so there's no remote file retrieval from the Puppet master.

    file { '/etc/file.conf':
      source => "/rdistbase/conf/file.conf.${::hostname}",
    }

    Just trigger the puppet run after your normal rdist file push.
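
    Combining this with the notify pattern from earlier in the thread might
    look something like the following (the daemon name is just a placeholder):

    file { '/etc/file.conf':
      ensure => file,
      source => "/rdistbase/conf/file.conf.${::hostname}",  # local copy delivered by rdist
      notify => Service['some-daemon'],
    }

    service { 'some-daemon':
      ensure     => running,
      hasrestart => true,
    }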

    Thanks,

    Nan

  • Jcbollinger at May 2, 2012 at 1:51 pm

    On May 1, 11:24 am, Philip Brown wrote:
    On Tue, May 1, 2012 at 6:58 AM, jcbollinger wrote:

    But that requires the files be hosted on the puppet master.
    What if the conf files are still rdisted out under /rdist/base instead?
    What does that look like?
    It looks exactly like what you are now doing (i.e. no Puppet).  How do
    you suppose Puppet is going to recognize that it needs to notify a
    service if it's not managing the file?
    That was indeed a major part of the question I have.
    I thought it keeps some kind of database of file checksums, etc?

    It checksums managed files on the client and the corresponding source
    files on the master to determine whether they match. Possibly it
    caches the master-side checksums (MD5 hashes by default), which would
    amount to pretty much what you seem to be thinking.

    Doesn't Puppet support some kind of
    "action if file changed", even if it doesn't "manage the file" itself?

    There is an important concept that you need to understand to use
    Puppet effectively, or even to accurately analyze how it might work
    for you: Puppet is all about *state*. Actions that Puppet performs
    are thoroughly a secondary consideration. A great many people seem to
    miss this concept or have trouble with it. They tend to view Puppet
    as some kind of fancy script engine, and that leads to all sorts of
    misconceptions, wrong expectations, and manifest design problems.

    It is more correct and more useful to view Puppet as a state
    management service. One describes desired aspects of his nodes'
    states to Puppet, and Puppet figures out how to make it so and keep it
    so. Puppet does have ancillary features such as 'subscribe' and
    'notify' that have a functional flavor, but even these are expressed
    in terms of managed resources, and best interpreted in terms of
    state. For example, a Service may 'subscribe' to the File managing
    its configuration to express the idea that the running instance of the
    service should be using the latest configuration (a matter of the
    service instance's state).
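
    For example, the subscribe form of Luke's earlier snippet could be
    written like this (names are illustrative); it expresses the same
    relationship from the service's side:

    file { '/etc/some-daemon.conf':
      ensure => file,
      source => 'puppet:///modules/some_daemon/some-daemon.conf',
    }

    service { 'some-daemon':
      ensure    => running,
      subscribe => File['/etc/some-daemon.conf'],  # refresh when the managed file changes
    }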

    In short, therefore, no, Puppet does not provide for triggering
    actions based on changes to files or resources it does not manage.
    Even if it did, in your case you would be looking at duplicating work
    that rdist already does to determine whether to copy the file, and you
    would have issues with synchronizing Puppet and rdist actions.

    I think it would be useful to you to consider what you hope to achieve
    by incorporating Puppet into your infrastructure.  Your rdist system
    must be working fairly well because you seem resistant to changing
    it.  What, then, do you think Puppet can bring to the table?
    A fair question. I thought I had mentioned it, but perhaps not
    sufficiently clearly:
    I want to change our existing hosttype:/etc/file.conf ->
    /rdistbase/conf/file.conf.hosttype symlink methodology, to be more
    like

    node hosttype {
      keep /rdistbase/conf/file.conf.host synced to /etc/file.conf
    }

    Our existing methodology works well 95% of the time. There are reasons
    to keep it in place as is. But I want 100% coverage. symlinks break
    patches, and a few other things.. The only way to extend things in
    that direction that I can see, is have puppet manage the duplication
    on a per-host basis.

    Note that Puppet can keep a local file in sync with a copy *on the
    master*, but it does not naturally support keeping files in sync with
    other files on the node. You could hash something up with an Exec or
    Cron job that uses a local rsync to perform such a task, but that's a
    bit messy. It also doesn't solve the problem of synchronizing Puppet
    and rdist: what happens when Puppet (or whatever other service)
    happens to try to sync a file at the same time that rdist is updating
    it?
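
    For completeness, the Exec approach mentioned above might be sketched
    roughly like this (paths and the daemon name are hypothetical), with all
    the caveats about racing rdist still applying:

    exec { 'sync file.conf from the rdist tree':
      command => '/usr/bin/cp /rdistbase/conf/file.conf.hosttype /etc/file.conf',
      # cmp -s exits 0 when the files are identical, so the copy only runs on a difference
      unless  => '/usr/bin/cmp -s /rdistbase/conf/file.conf.hosttype /etc/file.conf',
      notify  => Service['some-daemon'],
    }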

    Yes, I understand the "normal Puppet way" of doing things is to have
    those conf files inside the Puppet tree, but it is more maintainable
    *for us* to have all multi-host related stuff in the single rdist
    directory tree.

    You could have it both ways by symlinking directories or individual
    files from your rdist tree into the Puppet tree on the master. Puppet
    should be ok with that.

    Alternatively, it may be that Puppet just isn't going to do what you
    hope without more changes to your existing system than you're willing
    to accept.


    John

  • Philip Brown at May 3, 2012 at 1:18 am

    On Wed, May 2, 2012 at 6:51 AM, jcbollinger wrote:
    Yes, I understand the "normal Puppet way" of doing things is to have
    those conf files inside the Puppet tree, but it is more maintainable
    *for us* to have all multi-host related stuff in the single rdist
    directory tree.

    You could have it both ways by symlinking directories or individual
    files from your rdist tree into the Puppet tree on the master.  Puppet
    should be ok with that.

    Alternatively, it may be that Puppet just isn't going to do what you
    hope without more changes to your existing system than you're willing
    to accept.

    It seems to work just great via the trigger solution that someone else
    already suggested.
    As I said previously, the big problem I'm trying to *solve* is the use of
    symlinks. After a little testing, it seems that Puppet nicely solves this
    for us.

