FAQ
Hi all,

In http://goo.gl/Krjfh I read:

+++++++++++++++++++++++
Upgrading from CentOS-4 or CentOS-5:
We recommend everyone run through a reinstall rather than attempt an
inplace upgrade from CentOS-4 or CentOS-5
+++++++++++++++++++++++

Do you know if that advice will still hold for the 6 to 7 upgrade?

What is the preferred upgrade process if someone wants to upgrade in place?
I mostly run virtual guests on a one-VM-per-service (MySQL, php, Mail,
DNS, NFS/SMB) basis, with a main + spare physical machine.

I'm installing 6.2 on our dev servers and trying to pre-evaluate the amount
of work for when 7 is released.

--
RMA.


  • Ljubomir Ljubojevic at Feb 7, 2012 at 7:39 am

    On 02/07/2012 07:04 AM, Mihamina Rakotomandimby wrote:
    I'm installing 6.2 on our dev servers and try to pre-evaluate the amount
    of work when 7 will be released.
    6.x will be supported until 2020. Reinstalling once in 10 years should
    not be a problem.

    A reinstall is ALWAYS advised, since many packages will probably be either
    deprecated or heavily changed in version 7.0.

    That being said, there will always be an unsupported way to upgrade from
    one version to the next.

    It is your choice in the end.


    --

    Ljubomir Ljubojevic
    (Love is in the Air)
    PL Computers
    Serbia, Europe

    Google is the Mother, Google is the Father, and traceroute is your
    trusty Spiderman...
    StarOS, Mikrotik and CentOS/RHEL/Linux consultant
  • Johnny Hughes at Feb 7, 2012 at 7:58 am

    On 02/07/2012 06:39 AM, Ljubomir Ljubojevic wrote:
    It is Your choice in the end.
    It is also a MAJORLY big deal to move from one major version to another
    (i.e., a move from CentOS-5.x to CentOS-6.x). This is because there is no
    API/ABI compatibility between major versions like there is for minor
    versions.

    The php is going to be much newer, the samba is going to be much newer,
    the httpd is going to be much newer, the kernel is going to be much newer,
    ldap is going to be much newer, etc.

    For example, I recently upgraded a CentOS-4 box to CentOS-5, which took me
    from the CentOS-4 php to the CentOS-5 version ... I had to re-code my
    applications written for the php-4.3.9 in CentOS-4 to instead work with
    the php-5.1.6 in CentOS-5. I had to rework all the mod_auth files from
    httpd-2.0.x to work with mod_authz from httpd-2.2.x ... etc.
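    As a rough illustration of the mod_auth to mod_authz rework described
    above (the path, realm, and password file names here are hypothetical):
    an httpd-2.0.x auth stanza keeps most of its directives under 2.2.x, but
    the modules behind them were split up, so the matching LoadModule lines
    must be present:

    ```apache
    # httpd-2.2.x: mod_auth was split into mod_auth_basic, mod_authn_file,
    # and mod_authz_user; Order/Allow now comes from mod_authz_host (the
    # old mod_access). Without these, the old stanza fails at startup.
    LoadModule auth_basic_module modules/mod_auth_basic.so
    LoadModule authn_file_module modules/mod_authn_file.so
    LoadModule authz_user_module modules/mod_authz_user.so
    LoadModule authz_host_module modules/mod_authz_host.so

    <Directory "/var/www/private">        # hypothetical path
        AuthType Basic
        AuthName "Private Area"           # hypothetical realm
        AuthUserFile /etc/httpd/htpasswd  # hypothetical file
        Require valid-user
        Order allow,deny
        Allow from all
    </Directory>
    ```

    The directive names mostly survive; it is the module layout (and things
    built on top of it, like custom auth modules) that forces the rework.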

    The purpose of having enterprise software is so that you can get a
    return on your investment and use your code for 7 years (for CentOS
    versions before CentOS-4 ... now 10 years post CentOS-5). But
    keeping things for that period of time means that when you do need to
    upgrade, the "differences" are much harder and the changes are usually
    much bigger for a given package.

  • Ross Walker at Feb 7, 2012 at 10:07 am

    On Feb 7, 2012, at 7:58 AM, Johnny Hughes wrote:

    The purpose for having enterprise software is so that you can get a
    return on your investment and use your code for 7 years (for CentOS
    versions before CentOS-4 ... now 10 years in post CentOS-5). But
    keeping things for that period of time means that when you do need to
    upgrade, the "differences" are much harder and the changes are usually
    much bigger for a given package.
    For this reason it is often better to upgrade more frequently than every 7-10 years. Personally, I have a 5-year maximum lifetime for my systems. Even then upgrades are painful, and we try to stagger them so they aren't all due for upgrade at once.

    -Ross
  • Craig White at Feb 7, 2012 at 11:02 am

    On Feb 7, 2012, at 8:07 AM, Ross Walker wrote:
    For this reason it is often better to upgrade more frequently then every 7-10 years. Personally I have a 5 year max lifetime for my systems. Even then upgrades are painful and we try to stagger these so they all aren't due to upgrade at once.
    ----
    If you think about it, perhaps you are making the case for using a configuration management system like Puppet, where the configuration details are more or less abstracted from the underlying OS itself. Once it is running (and I'm not suggesting that getting there is a simple task), migrating servers from CentOS 5.x to 6.x, or even to Debian or Ubuntu, becomes a relatively simple task, as the configuration details come from the puppet server.
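    A minimal sketch of that kind of OS abstraction in a Puppet manifest
    (the class, paths, and per-distro names below are the usual ones but
    should be treated as assumptions, not a tested module):

    ```puppet
    # hypothetical ntp class serving both CentOS and Ubuntu nodes;
    # the manifest absorbs the per-distro service-name difference
    class ntp {
      $svc = $operatingsystem ? {
        'CentOS' => 'ntpd',
        'Ubuntu' => 'ntp',
        default  => 'ntpd',
      }

      package { 'ntp': ensure => installed }

      file { '/etc/ntp.conf':
        content => template('ntp/ntp.conf.erb'),  # one template for all nodes
        require => Package['ntp'],
      }

      service { $svc:
        ensure    => running,
        enable    => true,
        subscribe => File['/etc/ntp.conf'],  # restart on config change
      }
    }
    ```

    Migrating a node between distros (or major versions) then mostly means
    pointing the new install at the same puppet master.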

    This becomes more evident when you stop looking at a server being a single OS install on a single box and start running virtualized servers.

    Craig
  • Les Mikesell at Feb 7, 2012 at 12:38 pm

    On Tue, Feb 7, 2012 at 10:02 AM, Craig White wrote:
    if you think about it, perhaps you are making the case for using a configuration management system like puppet where the configuration details are more or less abstracted from the underlying OS itself. Thus once running (and I'm not suggesting that it is a simple task), migrating servers from CentOS 5.x to 6.x or perhaps to Debian or Ubuntu becomes a relatively simple task as the configuration details come from the puppet server.
    If it is possible to abstract the differences, perhaps you aren't
    using all the new features and didn't have to upgrade after all...

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Craig White at Feb 7, 2012 at 2:11 pm

    On Feb 7, 2012, at 10:38 AM, Les Mikesell wrote:
    If it is possible to abstract the differences, perhaps you aren't
    using all the new features and didn't have to upgrade after all...
    ----
    I suppose that if you believe that, then you are suffering from a lack of imagination. I can deploy LDAP authentication setups to either Ubuntu or CentOS in no time, even though the various pam, nss, and padl files are vastly different between the two.

    Some of the differences can be accounted for within Puppet itself, but for others - and I'm talking about actual config files - the differences can be handled within the templated config files, which have enough business logic to adapt their output to various needs, or you can simply use different templates altogether.
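    The "business logic inside the template" idea looks roughly like this in
    an ERB template (the hostname, base DN, and certificate paths are made
    up for illustration):

    ```erb
    # hypothetical ldap.conf.erb: one template, per-OS output
    uri ldap://ldap.example.com/
    base dc=example,dc=com
    <% if operatingsystem == 'CentOS' -%>
    # CentOS keeps its CA certs under /etc/openldap
    tls_cacertfile /etc/openldap/cacerts/ca.crt
    <% else -%>
    tls_cacertfile /etc/ssl/certs/ca.crt
    <% end -%>
    ```

    The same template renders a valid file on either distro, so the manifest
    that deploys it does not need to change.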

    Of course there is an investment to get to this stage, and if you've only got a handful of servers to upgrade it may not be worth it. But there is the satisfaction of knowing the configuration files are ensured to be what you intended them to be, to the point that if someone makes changes by hand, they are automatically changed back.

    I'm only expressing the notion that it is entirely possible to get beyond the paradigm of locked-in server installs on iron that take a lot of effort to maintain (i.e., update/upgrade X number_of_servers). There are some very sophisticated configuration management systems; Chef looked good, but I chose to go with Puppet, and I've been very pleased with its depth and scope.

    Craig
  • Les Mikesell at Feb 7, 2012 at 2:38 pm

    On Tue, Feb 7, 2012 at 1:11 PM, Craig White wrote:
    I suppose that if you believe that, then you are suffering from a lack of imagination. I can deploy LDAP authentication setups to either Ubuntu or CentOS with the various pam, nss, padl files which are vastly different in no time.
    How well does it handle Windows?
    I'm only expressing the notion that it is entirely possible to get beyond the paradigm of locked in server installs on iron that takes a lot of effort to maintain (ie, update/upgrade X number_of_servers). There are some very sophisticated configuration management system, chef looked good, I chose to go with puppet and I've been very pleased with the depth and scope of puppet.
    I'm actually very interested in this, but Puppet did not look like the
    right architecture. http://saltstack.org/ might not be quite ready
    for prime time, but it looks like a very reasonable design. The Python
    dependencies are probably going to be painful for cross-platform
    installs, but at least someone on its mailing list has it working on
    Windows, and there are already EPEL packages.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Craig White at Feb 7, 2012 at 3:36 pm

    On Feb 7, 2012, at 12:38 PM, Les Mikesell wrote:
    How well does it handle windows?
    ----
    I haven't tried it, but I gather that only a subset of features work on Windows at this point. It does seem that they are committed to the platform, though, and have been adding features with each release.
    ----
    I'm actually very interested in this, but puppet did not look like the
    right architecture. http://saltstack.org/ might not be quite ready
    for prime time but it looks like a very reasonable design. The python
    dependencies are probably going going to be painful for cross platform
    installs but at least someone on its mail list has it working on
    windows and there are already epel packages.
    ----
    That's a different type of management system. Puppet & Chef are simply about configuration management.

    Puppet's architecture is pretty awesome, but the puppet master itself can't run on a stock CentOS 5.x system because ruby 1.8.5 is too ancient. I suppose you can use Karanbir's ruby-1.8.7 packages (or, better yet, the enterprise ruby packages) if you insist on running the server on CentOS 5.x.

    The thing about Puppet is that the barrier to entry is rather high: it takes some time before you get to something useful, whereas Chef is more adept at putting other people's recipes into service fairly quickly. Then again, you will run into barriers with Chef that don't exist with Puppet, so it seemed that the ramp-up investment had long-term benefits.

    Craig
  • Les Mikesell at Feb 7, 2012 at 4:00 pm

    On Tue, Feb 7, 2012 at 2:36 PM, Craig White wrote:
    a different type of management system. Puppet & Chef are simply about configuration management.
    So is Salt, but scalable, and with the ability to make decisions based
    on client state in more or less real time. And even though it is
    mostly or all Python now, it really passes around data structures that
    should allow other languages to be used. It is still in its early stages,
    but they claim to have converted some Puppet installs easily.
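    The "passes around data structures" point is visible in Salt's state
    files, which are plain YAML rather than code; a minimal sketch of a
    state (hypothetical, using the classic early-Salt syntax) might be:

    ```yaml
    # hypothetical ntp.sls: Salt renders this to a plain data structure,
    # which is why other languages could in principle produce the same input
    ntp:
      pkg:
        - installed
      service:
        - running
        - name: ntpd
        - require:
          - pkg: ntp
    ```

    Because the end product is just data, the rendering layer (YAML, a
    template engine, or another language entirely) is swappable.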
    Puppet architecture is pretty awesome - but the puppet master itself can't be a stock CentOS 5.x system because ruby 1.8.5 is too ancient. I suppose you can use Karanbir's ruby-1.8.7 packages (or better yet, enterprise ruby packages) if you insist on running the server on CentOS 5.x. The thing about puppet is that the barrier to entry is rather high - it takes some time before you get to something useful whereas Chef is more adept at putting other people's recipes into service fairly quickly. Then again, you will run into barriers with Chef that don't exist with puppet so it seemed that the ramp up investment had long term benefits.
    Ruby seems like the only thing that might be worse than Python in
    terms of long-term version incompatibilities and installation
    problems, although Python is sort of a special case on RH systems
    since the install tools need it. I think something I wrote 20 years
    ago should still run today, but maybe that's just me. And I didn't
    see any way to tier puppet masters or keep one from falling over with a
    large number of clients.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Craig White at Feb 7, 2012 at 4:10 pm

    On Feb 7, 2012, at 2:00 PM, Les Mikesell wrote:

    Ruby seems like the only thing that might be worse than python in
    terms of long-term version incompatibilities and installation
    problems, although python is sort-of a special case on RH systems
    since the install tools need it. I think something I wrote 20 years
    ago should still run today, but maybe that's just me. And I didn't
    see any way to tier puppet masters or keep it from falling over with a
    large number of clients.
    ----
    It seems to me that a lot of the people who love Perl also love Ruby; the learning curve is not steep.

    Puppet clients are forgiving; you can use the stock ruby from CentOS 5.

    Puppet manifests won't expire because of changes in ruby, but rather because of changes in Puppet. A startup at this point should be fine for many years, as the path forward seems pretty well defined.

    There are a lot of scaling possibilities for the puppet master. A single master should be able to handle 200-300 servers without much difficulty, and there are organizations that scale well into the thousands on Puppet, though yes, that does require some sophistication. FWIW, I'm just a hair under 50 servers, and I'm running the puppet master on a VMware image with 768MB.

    Craig
  • Les Mikesell at Feb 7, 2012 at 4:35 pm

    On Tue, Feb 7, 2012 at 3:10 PM, Craig White wrote:
    puppet manifests won't expire because of changes in ruby rather because of changes in puppet but a startup at this point should be fine for many years as the path forward seems pretty well defined.
    Does it keep a self-contained library, or is it subject to package
    updates and future incompatibilities? I don't know much about ruby,
    but the guy here who uses it wants nothing to do with packaged
    versions or anything that will either be 'too old' or break things
    with updates. Things like that make me very nervous. If today's and
    yesterday's versions of a language have to be different, they were
    probably both wrong.
    There's a lot of scaling possibilities for puppet master and a single master should be able to handle 200-300 servers without much difficulty and there are organizations that scale well into the thousands on puppet but yes, that does require some sophistication. FWIW, I'm just a hair under 50 servers and I'm running the puppet master on a VMWare image of 768MB.
    I'd need it to handle a couple thousand, across a bunch of platforms,
    and I'd rather not fight with it to get there. I do have ocsinventory
    agents reporting to a single server, but that's basically one http
    post a day with randomized timing, so not even close to the same
    problem. And the even bigger issue will be making it coordinate with
    our 'human' process and scheduling controls.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Lamar Owen at Feb 7, 2012 at 4:46 pm

    On Tuesday, February 07, 2012 04:35:29 PM Les Mikesell wrote:
    If today's and
    yesterday's version of a language have to be different they were
    probably both wrong.
    Like Python 2.x versus 3.x? Or even 2.4 versus 2.6? Plone, for one, is still bundling an older Python due to incompatibilities between Zope and newer Python.
  • Les Mikesell at Feb 7, 2012 at 5:25 pm

    On Tue, Feb 7, 2012 at 3:46 PM, Lamar Owen wrote:
    On Tuesday, February 07, 2012 04:35:29 PM Les Mikesell wrote:
    If today's and
    yesterday's version of a language have to be different they were
    probably both wrong.
    Like Python 2.x versus 3.x? Or even 2.4 versus 2.6? Plone, for one, is still bundling older Python due to incompatibilities with Zope and newer Python.
    Exactly, and without looking too closely, ruby seems to be changing
    even faster. There is not going to be a perfect solution to this
    problem, especially if you consider separately packaged libraries that
    really have to change over time. But RPM needs to handle concurrent
    multi-versioned targets gracefully, or they should just change the name
    when it is not the same language anymore and won't execute its own old
    syntax, so that the packages don't conflict.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Craig White at Feb 7, 2012 at 5:43 pm

    On Feb 7, 2012, at 2:35 PM, Les Mikesell wrote:
    Does it keep a self-contained library or is it subject to package
    updates and future incompatibilities? I don't know much about ruby
    but the guy here who uses it wants nothing to do with packaged
    versions or anything that will either be 'too old' or break things
    with updates. Things like that make me very nervous. If today's and
    yesterday's version of a language have to be different they were
    probably both wrong.
    ----
    We are very much a ruby factory here and pretty much use enterprise ruby across the board (CentOS & Ubuntu):

    http://www.rubyenterpriseedition.com/

    which is far from the newest, but it is entirely predictable and very performance-tuned for running our web apps. It just seemed easier to use the same version across the board.

    Puppet itself can work with any reasonable version of ruby:

    - 1.8.7 to 1.9.3 on the server (technically, you can run the puppet master on 1.8.5, but that would pretty much preclude theforeman & the dashboard, and I make heavy use of theforeman).

    - 1.8.5+ on the client

    And so changes in the ruby language are really just a matter for Puppet itself, which I believe you would call a self-contained library. The future is always difficult to predict; if I had that gift, I wouldn't be working, but rather making a killing on sports bets.
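    Those version floors are the kind of check a tool can do at startup. A
    small Ruby sketch of the comparison (a hypothetical helper, not Puppet's
    actual code), using Gem::Version so that '1.8.10' sorts after '1.8.9'
    where a plain string compare would get it wrong:

    ```ruby
    require 'rubygems'

    # Compare a version string (e.g. the running interpreter's RUBY_VERSION)
    # against a minimum requirement, component by component.
    def version_at_least?(have, want)
      Gem::Version.new(have) >= Gem::Version.new(want)
    end

    puts version_at_least?(RUBY_VERSION, '1.8.5')  # the client-side floor above
    ```

    The same comparison works for any dotted version string, which is why
    packaging tools lean on it rather than on string ordering.
    
    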

    theforeman takes puppet up a notch...
    http://theforeman.org/

    Craig

Discussion Overview
group: centos
categories: centos
posted: Feb 7, '12 at 1:04a
active: Feb 7, '12 at 5:43p
posts: 15
users: 7
website: centos.org
irc: #centos
