I've been experiencing what appears to be a subtle bug in Puppet, and I'm
wondering if anybody has an idea for a good workaround. The bug report is
here: http://projects.puppetlabs.com/issues/9277

I'm using definitions in puppet to manage 3 different sets of file
resources related to the websites we deploy in our LAMP environment: apache
modules, vhosts, and website docroots. We need to be able to easily deploy
hundreds of different websites on different servers. And for each of those
websites, those are the 3 things that need to be managed: the Apache
modules that need to be installed and active on the server, the vhost
configuration for that specific website, and the data files for that
specific website, which hold all the .html, .php, etc. files.

So for each of these entities, I'm using definitions. For example, every
site needs to have its own vhost file. So I've created a definition for
vhosts that takes in the parameters for that particular site and creates
the vhost file using a .erb template. All of these vhost files get
placed into /etc/apache2/sites-enabled, one file per website.

But here's the hard part. When I remove one of the websites from being
managed by puppet, I want puppet to clean up the resources that are no
longer needed. Apache should no longer serve the page for that website once
I've removed it from puppet. I want puppet to remove the vhost file in
/etc/apache2/sites-enabled/. So I set up that directory as a "file"
resource ("/etc/apache2/sites-enabled/") and use the "purge" option.

The problem is that every time puppet runs, it deletes *every* vhost file
in that directory, and then re-creates the ones I have configured using my
vhost definition. Puppet is unable to realize that the vhost files created
by my vhost definition are "managed" files, and therefore should not be
purged in the first place.

This becomes a BIG problem once I try to manage all the website files for
each of our hundreds of websites. This adds up to about 1G of data. So
every time puppet runs, it purges and re-creates about 1G of files across
all those different websites. This is obviously having a huge performance
impact, especially where filebucketing is concerned.

So, I'm trying to figure out a way around this until the bug is fixed,
because I have a feeling I'll be waiting a while. I've had to turn off
purging altogether for now because of the performance issues. So if I
remove a website from my puppet config, I have to manually log into each
web server and remove the related files for that website.

So to recap, here is an example:

I want to purge all the vhost files that are no longer being managed by
puppet:
file { "/etc/apache2/sites-available":
ensure => directory,
mode => 0755,
purge => true,
recurse => true,
require => Class["webserver::install"],
notify => Class["webserver::service"],
}


I also want to create a vhost file for each site I want configured:

First I create a definition:

define webserver::vhost ( $docroot, $serveralias ) {
  # ... create the vhost file, etc.
}
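
For reference, the elided body boils down to a file resource rendered from an
ERB template. A minimal sketch, where the template path is an assumption:

define webserver::vhost ( $docroot, $serveralias ) {
  # Render the vhost config from a template; the template path is hypothetical.
  file { "/etc/apache2/sites-available/${name}":
    ensure  => file,
    content => template('webserver/vhost.conf.erb'),
  }
}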

Then call this definition for each website I want enabled:

webserver::vhost { 'fakewebsite.com':
  docroot     => '/var/www/fakewebsite.com/html',
  serveralias => '*.fakewebsite.com',
}

Since the file created by webserver::vhost is managed by a different
resource than the "file" resource for the /etc/apache2/sites-available
directory, puppet purges and re-creates this vhost file every time puppet
is run. The same thing also happens to the ~1G of .html, .php, .jpg, etc.
files placed into /var/www/*/html for each website, every time puppet runs
rsync to check for new versions of the website code/data.

The only way I can think of to get around this is to create my own custom
resource type, and then use the built-in "resources" type to purge it. So
if I could define my own custom resource for a vhost file, and then set up
a vhost like this:

vhost { 'fakewebsite.com':
  docroot     => blah,
  serveralias => blah,
}

and then use the "resources" type:

resources { 'vhost': purge => true }

This "should" tell puppet to purge any vhost resources on the *entire*
system that aren't managed by puppet? Is this the correct way to go about
this? I am concerned, because if I take this route, it seems like puppet
will purge everything on the entire server that it believes to be a vhost
file not managed by puppet. If I mess something up in my vhost resource
definition, I could potentially destroy my server. And I don't want puppet
going into /home/ directories and purging backup/test vhost files that
users may be testing with.

Any advice?


  • Nan Liu at May 5, 2012 at 9:15 pm

    On Sat, May 5, 2012 at 12:17 PM, Miles Stevenson wrote:
    I've been experiencing what appears to be a subtle bug in Puppet [...]
    The bug report is here: http://projects.puppetlabs.com/issues/9277
    This only affects files in subdirectories created/managed by
    something other than the file resource type. If you are wrapping a
    file resource in a define type, you should not be affected. Here's a
    gist example: https://gist.github.com/2605292
    [...]

    I want to purge all the vhost files that are no longer being managed by
    puppet:
    file { "/etc/apache2/sites-available":
    ensure => directory,
    mode => 0755,
    purge => true,
    recurse => true,
    require => Class["webserver::install"],
    notify => Class["webserver::service"],
    }
    You might want recurselimit => 1 here.
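
    That would look roughly like this (recurselimit => 1 keeps Puppet from
    recursing below the directory's immediate children):

    file { "/etc/apache2/sites-available":
      ensure       => directory,
      mode         => 0755,
      purge        => true,
      recurse      => true,
      recurselimit => 1,    # manage and purge only direct children
      require      => Class["webserver::install"],
      notify       => Class["webserver::service"],
    }
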
    I also want to create a vhost file for each site I want configured:

    First I create a definition:

    define webserver::vhost ( $docroot, $serveralias ) {
      # ... create the vhost, etc.
    }
    So how do you manage the file /etc/apache2/sites-available/$name in
    webserver::vhost?
    [...]

    Since the file created by webserver::vhost is managed by a different
    resource than the "file" resource for the /etc/apache2/sites-available
    directory, puppet purges and re-creates this vhost file every time
    puppet is run. [...]
    As long as it's a file resource in the define type, you should be OK.
    The only way I can think of to get around this is to create my own
    custom resource type, and then use the "resources" type:

    resources { 'vhost': purge => true }

    This should tell puppet to purge any vhost resources on the *entire*
    system that aren't managed by puppet. Is this the correct way to go
    about this? [...]
    You will run into 9277 if you write your own native type for vhost.
    Your custom vhost type needs to support self.instances; then you can
    use purge => true. However, look at the gist example: as long as you
    specify the file in webserver::vhost, you should be able to purge the
    directory.

    define webserver::vhost {
      file { "/etc/apache2/sites-available/${name}": }
      # ...
    }
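
    For comparison, a built-in type that already implements self.instances
    is host, so this works out of the box:

    resources { 'host': purge => true }  # removes /etc/hosts entries not declared in the catalog

    A custom vhost type would need the same support before resources-based
    purging could work.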

    Thanks,

    Nan

  • Miles Stevenson at May 6, 2012 at 4:34 pm
    Thanks. I think the problem was a mismatch in my file resources inside
    the vhost definition. Once I switched from "ensure => present" to
    "ensure => file", everything worked how I wanted.
