Hello everybody.

I'm new to the group and I'm trying to do something a little different here.

Puppet Master: CentOS 6.3 running puppet-server-2.7.14-2.1
Clients: CentOS 6.3 (puppet-2.6.17-2.el6.noarch) & SLES 11.2 (puppet 2.7.14-2.1)

I am trying to configure Puppet in a very dynamic way. Nodes are not
defined in nodes.conf, and each client is configured solely by facts that
define the role of the node.
Certificate autosigning is turned on.
I am trying to automate the rollout of 50k clients. The clients will not
have DNS entries available at the time Puppet first runs, but will get
DNS entries an hour or so later.
During the build process, each client picks a pseudo-random hostname to
register with Puppet.
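The master-side setup described above (no node definitions, blanket autosigning) boils down to one puppet.conf setting. A minimal sketch, assuming Puppet 2.x config syntax; the file path is a parameter here rather than the real /etc/puppet/puppet.conf:

```shell
#!/bin/sh
# Sketch: enable blanket cert autosigning on the master (Puppet 2.x syntax).
# The real file would be /etc/puppet/puppet.conf; the path is parameterised.
# WARNING: autosign = true signs every incoming CSR unconditionally.
write_autosign_conf() {
    conf="$1"
    cat >> "$conf" <<'EOF'
[master]
autosign = true
EOF
}
```

With this in place the master signs every incoming CSR, which is what makes a fully unattended rollout possible — and what weakens the security model, as discussed below.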

I am trying to figure out a solution that would be totally programmatic for
registration/re-registration and tolerant of client hostname changes.

Current issues:

- If the client SSL cert is removed from the puppet master (and
puppetmasterd is restarted), the client must "rm -rf /var/lib/puppet" and
re-run to get a valid cert.
- If /var/lib/puppet is removed and the SSL certificate still lives on
the puppet master, the client cannot re-register until the cert is removed
from the master.
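The two failure modes map to two manual recovery actions. A sketch, assuming Puppet 2.x default paths (ssldir under /var/lib/puppet); the directory is read from $SSLDIR at call time so it can be redirected:

```shell
#!/bin/sh
# Hypothetical recovery helpers for the two cases above (Puppet 2.x paths).
# SSLDIR is resolved at call time so it can be pointed elsewhere for testing.

# Case 1: the cert was removed on the master. Wipe the agent's local SSL
# state so the next agent run generates a fresh key and CSR:
reset_agent_ssl() {
    rm -rf "${SSLDIR:-/var/lib/puppet/ssl}"
}

# Case 2: local state was wiped but a stale cert remains on the master.
# Only the master can fix this; run there:
#   puppet cert --clean <certname>
```

Automating registration means automating both sides: the agent can always wipe its own state, but the stale-cert case requires a master-side cleanup step.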

Essentially, I will never be using Jabber or remote commands, so I don't
really care what the systems are called; if one stops working, I will just
replace it with a fresh working build.
I need each system to be able to register/re-register regardless of whether
an entry exists on the puppet master or the local private key gets wiped out.
This is so I can guarantee that the puppet agent will continue to run if its
hostname changes, or if the local caches need to be wiped programmatically
to fix a stuck puppet run.

I'm sure this issue has come up before, but I can't find anything useful in
Google results. I understand that these requirements are in place for
security reasons, but that is not as much of a concern in this particular
implementation (think 50k dumb nodes that perform simple tasks). I would
prefer a more secure method, but it doesn't seem that Puppet is tolerant of
dynamic nodes that might move around (regularly).

Any ideas?

Thanks,

-Mike

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users+unsubscribe@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


  • Denmat at Feb 26, 2013 at 9:39 pm
You could try the Ruby gem uuid. That would give you a reasonably unique cert name. You would then run a scan/probe to verify current certs against current nodes and remove unused certs from the master.

The nodes are just rebuilt with a new uuid and re-register. I wouldn't worry about SSL dir cleaning if security isn't high on your priorities.

    Den
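This suggestion can be sketched in shell; note this swaps uuidgen (or the kernel's uuid source) in for the Ruby uuid gem, and the helper name and "node-" prefix are hypothetical:

```shell
#!/bin/sh
# Sketch of a uuid-based certname (uuidgen stands in for the ruby uuid gem).
# Generated once at build time and persisted, so later runs keep the identity.
gen_certname() {
    uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
    printf 'node-%s' "$uuid" | tr 'A-Z' 'a-z'
}

# The result would go into puppet.conf so every run reuses the same identity:
#   [agent]
#   certname = node-<uuid>
```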
  • Erik Dalén at Feb 26, 2013 at 9:43 pm
For desktops and laptops that don't have a proper static hostname, we have
used the MAC address (minus colons) as the certname. That is pretty much
guaranteed to be unique and to never change, so it might work in your
situation as well.
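A sketch of this approach: normalising a MAC address into a certname. The interface name and the sysfs path are assumptions for Linux; Facter exposes the same value as the macaddress fact:

```shell
#!/bin/sh
# Sketch: turn a MAC address into a stable certname.
# Strips colons and lowercases, e.g. 00:1A:2B:3C:4D:5E -> 001a2b3c4d5e.
mac_certname() {
    printf '%s' "$1" | tr -d ':' | tr 'A-Z' 'a-z'
}

# On Linux the MAC could be read from sysfs (interface name is an assumption):
#   certname=$(mac_certname "$(cat /sys/class/net/eth0/address)")
#   puppet agent --certname "$certname"
```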




    --
    Erik Dalén


Discussion Overview
group: puppet-users
category: puppet
posted: Feb 26, '13 at 6:04p
active: Feb 26, '13 at 9:43p
posts: 3
users: 3
website: puppetlabs.com

3 users in discussion: Denmat (1 post), Erik Dalén (1 post), yngmike (1 post)
