Hi,

I've run into some problems with my Puppet configuration; I'm managing
several Ubuntu and OpenBSD hosts.

On OpenBSD hosts I sometimes get (5.0 is the OpenBSD release):
info: Retrieving plugin
err: Could not retrieve catalog from remote server: Error 400 on SERVER:
Could not find template 'ubuntu-common/5.0/etc-openntpd-ntpd.conf.erb' at
/usr/share/puppet/modules/ubuntu-common/manifests/base.pp:7 on node xxx
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

But this is an Ubuntu template file; according to the case $operatingsystem
in site.pp, OpenBSD should never include it.

I tracked down the exact scenario that makes it fail:

1. Restart the puppet master, run the agent only on OpenBSD hosts: it
never fails, everything works as expected.
2. Restart the puppet master, run the agent only on Ubuntu hosts: it
never fails, everything works as expected.
3. Restart the puppet master, run the agent on both Ubuntu and OpenBSD
hosts: Ubuntu works as expected, OpenBSD fails with the above error
message (after the first Ubuntu agent has connected).
4. After scenario 3, whenever a Puppet configuration file changes,
OpenBSD works again until an Ubuntu agent connects to the master.

Do you have an idea of what could be wrong, or is this a known Puppet issue?

Could it be a Puppet version issue? The master and Ubuntu hosts are using
2.7.19, the OpenBSD hosts 2.7.1. (I haven't tried downgrading Ubuntu or
upgrading OpenBSD's version; the newer OpenBSD 5.1 ships Puppet 2.7.5.)

Given scenario 4, could it be a caching issue on the master?

I tried some puppet.conf options:

ignorecache=true
usecacheonfailure=false

but it didn't change anything.

The master and Ubuntu hosts are running Ubuntu 12.04:
# puppet --version
2.7.19
# ruby --version
ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]

The OpenBSD hosts are running OpenBSD 5.0:
# puppet --version
2.7.1
# ruby18 --version
ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-openbsd]

Here is a very simplified version of my Puppet files. I haven't tried
exactly this subset of the configuration; I will try to narrow the problem
down to the simplest configuration that reproduces it.

manifests/site.pp:

node basenode {
  case $operatingsystem {
    "Ubuntu"  : { include "ubuntu-common::base" }
    "OpenBSD" : { include "openbsd-common::base" }
    default   : {}
  }
}

modules/ubuntu-common/manifests/base.pp:

class ubuntu-common::base {
  file { "/etc/openntpd/ntpd.conf":
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    content => template("ubuntu-common/$operatingsystemrelease/etc-openntpd-ntpd.conf.erb"),
  }
}
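
For reference, the template path interpolates the client's own
$operatingsystemrelease fact, which would explain why the failing path
contains the OpenBSD release number. The expansion below is illustrative:

```puppet
# On an Ubuntu 12.04 agent, $operatingsystemrelease is "12.04", so
# template() resolves the path to
#   modules/ubuntu-common/templates/12.04/etc-openntpd-ntpd.conf.erb
#
# If the same class were evaluated for an OpenBSD 5.0 agent, the path
# would become ubuntu-common/5.0/etc-openntpd-ntpd.conf.erb, which does
# not exist -- exactly the path reported in the error above.
$path = "ubuntu-common/$operatingsystemrelease/etc-openntpd-ntpd.conf.erb"
```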

modules/openbsd-common/manifests/base.pp:

class openbsd-common::base {
  # ...
}

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To view this discussion on the web visit https://groups.google.com/d/msg/puppet-users/-/pLm1_gzZ9LsJ.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to puppet-users+unsubscribe@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.


  • Jcbollinger at Oct 3, 2012 at 2:43 pm

    On Tuesday, October 2, 2012 9:35:13 AM UTC-5, Mike wrote:
    [...]

    Do you have an idea of what could be wrong, or is this a known Puppet issue?

    I am not aware of any known Puppet issue that results in behavior
    comparable to this.


    Could it be a Puppet version issue? The master and Ubuntu hosts are using
    2.7.19, the OpenBSD hosts 2.7.1. (I haven't tried downgrading Ubuntu or
    upgrading OpenBSD's version; the newer OpenBSD 5.1 ships Puppet 2.7.5.)

    That should not be an issue. If the master and all the clients are on
    2.7.x, and none of the clients runs a newer Puppet than the master, you
    should be fine. You should even be OK with 2.6.x clients.


    Given scenario 4, could it be a caching issue on the master?

    It rather looks like one, but the key question is why such an issue would
    arise. Puppet should never use cached details of one node's catalog to
    build a different node's catalog. Although I can't rule out a Puppet bug,
    the behavior more likely arises from a problem in your manifests or
    client configuration.

    I tried some puppet.conf options:

    ignorecache=true
    usecacheonfailure=false

    but it didn't change anything.

    Those options affect only the agent as far as I know, not the master.

    [...]

    Here is a very simplified version of my Puppet files. I haven't tried
    exactly this subset of the configuration; I will try to narrow the
    problem down to the simplest configuration that reproduces it.

    Often the process of narrowing it down will itself reveal the problem. I
    don't see why the manifest set you presented would exhibit a problem such
    as the one you described, and since you didn't confirm that it does, I'm
    not going to do further analysis.

    My only guess at this point is that something is screwy with your SSL
    configuration. Puppet identifies nodes by the certname on the client
    certificates they present, so if you've done something like sharing the
    same client cert among all your clients then the master will not recognize
    them as different nodes. Node facts (including $hostname) do not enter
    that picture.


    John

  • Mike at Oct 4, 2012 at 9:06 am
    Thanks for your answer.

    I think I've found the problem.

    I had an include at the top level of the Ubuntu base.pp,
    modules/ubuntu-common/manifests/base.pp:

    include ubuntu-something

    class ubuntu-common::base {
      file { "/etc/openntpd/ntpd.conf":
        ensure  => file,
        owner   => root,
        group   => root,
        mode    => 644,
        content => template("ubuntu-common/$operatingsystemrelease/etc-openntpd-ntpd.conf.erb"),
      }
    }

    If I understand it correctly, OpenBSD works until this file ends up in the
    cache: when an Ubuntu agent connects, the file is loaded, and since the
    "include" is outside the class it gets applied to OpenBSD too. Am I right?
    At least, moving the include inside the ubuntu-common::base class fixed
    my issue.
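
    For the record, the fixed layout looks like this (a sketch;
    ubuntu-something stands for whatever class the include actually pulls in):

```puppet
# The include now lives inside the class body, so it is only evaluated
# for nodes that actually declare ubuntu-common::base.
class ubuntu-common::base {
  include ubuntu-something

  file { "/etc/openntpd/ntpd.conf":
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    content => template("ubuntu-common/$operatingsystemrelease/etc-openntpd-ntpd.conf.erb"),
  }
}
```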

    I find it a bit annoying that something may fail depending on which agent
    connects. Is there something I can do (like a puppet master option) so
    that, in my example, OpenBSD would fail immediately rather than only once
    an Ubuntu agent connects?

    Mike
    On Wednesday, October 3, 2012 4:43:16 PM UTC+2, jcbollinger wrote:


    [...]
  • Jcbollinger at Oct 4, 2012 at 1:23 pm

    On Thursday, October 4, 2012 4:00:53 AM UTC-5, Mike wrote:
    [...]

    If I understand it correctly, OpenBSD works until this file ends up in the
    cache: when an Ubuntu agent connects, the file is loaded, and since the
    "include" is outside the class it gets applied to OpenBSD too. Am I right?
    At least, moving the include inside the ubuntu-common::base class fixed
    my issue.

    That's conceivable. Any declaration at top-level in any manifest applies,
    in principle, to all nodes. Depending on the manifest in which it appears,
    however, it may or may not actually be parsed. This is an excellent reason
    to avoid such declarations except in site.pp and any files directly or
    indirectly 'import'ed by site.pp. I cannot confirm the specifics of
    Puppet's caching strategy with respect to such declarations, but I can
    definitely say that the declaration was placed incorrectly.
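
    A minimal sketch of the hazard (module and class names here are made up
    for illustration): whether a top-scope declaration in an autoloaded
    manifest applies to a given node depends on whether that manifest has
    been parsed yet, which in turn depends on which agents have already
    connected:

```puppet
# modules/mymodule/manifests/init.pp -- illustrative only
include some_class   # Top scope: applies, in principle, to ALL nodes,
                     # but only once this file happens to be parsed
                     # (i.e. after some node first declares mymodule).

class mymodule {
  include some_class # Class scope: applies only to nodes that declare
                     # mymodule, independent of parse order.
}
```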


    I find it a bit annoying that something may fail depending on which agent
    connects. Is there something I can do (like a puppet master option) so
    that, in my example, OpenBSD would fail immediately rather than only once
    an Ubuntu agent connects?


    There may be master-side cache management directives you could use, but
    something like that would have just kept the error in your manifests
    hidden, rather than revealing it. Meanwhile, you would suffer a
    performance penalty and concomitant reduction in capacity at your
    puppetmaster.

    Although it's not exactly what you were looking for, I think you can avoid
    a recurrence of this sort of problem by ensuring that all your
    declarations are inside classes or nodes, or possibly in site.pp. That
    should avoid flip-flops with respect to which declarations Puppet sees
    for each client.


    John

    --
    You received this message because you are subscribed to the Google Groups "Puppet Users" group.
    To view this discussion on the web visit https://groups.google.com/d/msg/puppet-users/-/IhZ5dk2TDVUJ.
    To post to this group, send email to puppet-users@googlegroups.com.
    To unsubscribe from this group, send email to puppet-users+unsubscribe@googlegroups.com.
    For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
  • Mike at Oct 4, 2012 at 4:24 pm
    Thank you for your help John.

    I suspected that top-level declarations could cause problems, and tried
    to avoid them even before this problem arose, but I didn't realize I had
    one ;-)
    On Thursday, October 4, 2012 3:23:32 PM UTC+2, jcbollinger wrote:


    [...]

Discussion Overview

group: puppet-users
categories: puppet
posted: Oct 2, 2012 at 3:17 PM
active: Oct 4, 2012 at 4:24 PM
posts: 5
users: 2 (Mike: 3 posts, Jcbollinger: 2 posts)
website: puppetlabs.com
