FAQ
Hi,

Does somebody know when RHEL 5.3 will be released?
What is generally the delay between RHEL/CentOS releases?

Regards
Alain
--
The French version of the Linux man pages
http://manpagesfr.free.fr


  • Karanbir Singh at Jan 16, 2009 at 2:27 pm

    Alain PORTAL wrote:
    Does somebody know when RHEL 5.3 will be released?
    What is generally the delay between RHEL/CentOS releases?
    The official 5.3 beta period is now over, so it would be a fair guess to
    say we expect 5.3 anytime from early to end Feb.

    The aim is to get CentOS-5.3 released within a few weeks of public
    release upstream.

    --
    Karanbir Singh
    CentOS Project { http://www.centos.org/ }
    irc: z00dax, #centos at irc.freenode.net
  • Alain PORTAL at Jan 16, 2009 at 2:32 pm

    On Friday 16 January 2009 at 15:27, Karanbir Singh wrote:

    The official 5.3 beta period is now over, so it would be a fair guess to
    say we expect 5.3 anytime from early to end Feb.
    Hmm... I thought it was at the beginning of January.
    The aim is to get CentOS-5.3 released within a few weeks of public
    release upstream.
    What does "a few" mean? ;-)
    Well, I take it that CentOS 5.3 will be released near the end of
    March; I can't wait that long.
    So, I'll install 5.2

    Thanks!
    Alain
    --
    The French version of the Linux man pages
    http://manpagesfr.free.fr
  • Karanbir Singh at Jan 16, 2009 at 2:38 pm

    Alain PORTAL wrote:
    So, I'll install 5.2
    you might want to look into exactly what the .2 and .3 signify in there.

    --
    Karanbir Singh
    CentOS Project { http://www.centos.org/ }
    irc: z00dax, #centos at irc.freenode.net
  • Scott Silva at Jan 16, 2009 at 5:27 pm

    on 1-16-2009 6:32 AM Alain PORTAL spake the following:
    On Friday 16 January 2009 at 15:27, Karanbir Singh wrote:
    The official 5.3 beta period is now over, so it would be a fair guess to
    say we expect 5.3 anytime from early to end Feb.
    Hmm... I thought it was at the beginning of January.
    The aim is to get CentOS-5.3 released within a few weeks of public
    release upstream.
    What does "a few" mean? ;-)
    Well, I take it that CentOS 5.3 will be released near the end of
    March; I can't wait that long.
    So, I'll install 5.2
    When 5.3 is released there is a "magic" incantation that will transform your
    5.2 into 5.3!

    Here it is;

    Are you ready?


    "yum update"

    Easy, huh?



    --
    MailScanner is like deodorant...
    You hope everybody uses it, and
    you notice quickly if they don't!!!!

  • Alain PORTAL at Jan 16, 2009 at 5:59 pm

    On Friday 16 January 2009, Scott Silva wrote:
    When 5.3 is released there is a "magic" incantation that will transform
    your 5.2 into 5.3!

    Here it is;

    Are you ready?


    "yum update"

    Easy, huh?
    Are you really sure about that?

    --
    The Linux man pages in French
    http://manpagesfr.free.fr/
  • Seth vidal at Jan 16, 2009 at 6:09 pm

    On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:
    On Friday 16 January 2009, Scott Silva wrote:
    When 5.3 is released there is a "magic" incantation that will transform
    your 5.2 into 5.3!

    Here it is;

    Are you ready?


    "yum update"

    Easy, huh?
    Are you really sure about that?
    I am.

    :)
    -sv
  • Alain PORTAL at Jan 16, 2009 at 6:16 pm

    On Friday 16 January 2009, seth vidal wrote:
    On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:

    Are you really sure about that?
    I am.

    :)
    OK. If you say so, I trust you ;-)
    I thought an upgrade was needed.

    Regards
    Alain
    --
    The Linux man pages in French
    http://manpagesfr.free.fr/
  • Seth vidal at Jan 16, 2009 at 6:23 pm

    On Fri, 2009-01-16 at 19:16 +0100, Alain PORTAL wrote:
    On Friday 16 January 2009, seth vidal wrote:
    On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:

    Are you really sure about that?
    I am.

    :)
    OK. If you say so, I trust you ;-)
    I thought an upgrade was needed.
    a long time ago that was potentially true.

    the difference between yum update and yum upgrade was whether or not
    obsoletes were processed. In an upgrade they were, in an update they
    defaulted to not. This was necessary in a world where mutually
    obsoleting pkgs were allowed. Since about RHEL 4, no one has been
    letting a distro out the door with mutually obsoleting pkgs, so it is
    no longer a big deal.

    obsoletes=1 is now the yum default.

    Hope that helps.
    -sv
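
    Seth's distinction above maps directly onto yum's configuration and
    flags. A minimal sketch (as on RHEL/CentOS 5-era yum; the config path
    and option names are standard yum, not something from this thread):

    ```shell
    # With obsoletes=1 under [main] in /etc/yum.conf (the modern default),
    # "yum update" also processes Obsoletes: tags when resolving.
    grep obsoletes /etc/yum.conf

    # With that default, these are effectively equivalent:
    yum update              # honours obsoletes via the config default
    yum --obsoletes update  # forces obsoletes processing explicitly

    # "yum upgrade" has always implied obsoletes processing:
    yum upgrade
    ```
    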
  • Alain PORTAL at Jan 16, 2009 at 7:49 pm

    On Friday 16 January 2009, seth vidal wrote:
    On Fri, 2009-01-16 at 19:16 +0100, Alain PORTAL wrote:

    OK. If you say so, I trust you ;-)
    I thought an upgrade was needed.
    a long time ago that was potentially true.

    the difference between yum update and yum upgrade was whether or not
    obsoletes were processed. In an upgrade they were, in an update they
    defaulted to not. This was necessary in a world where mutually
    obsoleting pkgs were allowed. Since about RHEL 4, no one has been
    letting a distro out the door with mutually obsoleting pkgs, so it is
    no longer a big deal.

    obsoletes=1 is now the yum default.

    Hope that helps.
    For understanding, yes.

    Thanks
    --
    The Linux man pages in French
    http://manpagesfr.free.fr/
  • Les Mikesell at Jan 16, 2009 at 6:33 pm

    Alain PORTAL wrote:
    On Friday 16 January 2009, Scott Silva wrote:
    When 5.3 is released there is a "magic" incantation that will transform
    your 5.2 into 5.3!

    Here it is;

    Are you ready?


    "yum update"

    Easy, huh?
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Alain PORTAL at Jan 16, 2009 at 7:51 pm

    On Friday 16 January 2009, Les Mikesell wrote:
    Alain PORTAL wrote:
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.
    No confusion for me.
    I understood that upgrading X.Y -> X+1.0 is a bad idea

    --
    The Linux man pages in French
    http://manpagesfr.free.fr/
  • Charlie Brady at Jan 21, 2009 at 2:15 pm

    On Fri, 16 Jan 2009, Alain PORTAL wrote:
    On Friday 16 January 2009, Les Mikesell wrote:
    Alain PORTAL wrote:
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.
    No confusion for me.
    I understood that upgrading X.Y -> X+1.0 is a bad idea
    I don't think it is a bad idea. I just think that sometimes there are some
    problems, or Red Hat is not prepared to say that it will work. CentOS 3 ->
    4 worked for me, using the 'upgradeany' option to anaconda.
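
    The 'upgradeany' option mentioned above is passed to the installer at
    boot time. A sketch (the PXE snippet is illustrative, not from this
    thread):

    ```shell
    # At the RHEL/CentOS installer boot prompt, "upgradeany" tells anaconda
    # to offer the upgrade path even when the product string in
    # /etc/redhat-release isn't one it officially supports:
    #
    #   boot: linux upgradeany
    #
    # The same flag can be appended to a PXE/kickstart kernel line, e.g.:
    #   append initrd=initrd.img upgradeany
    ```
    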
  • Dag Wieers at Jan 21, 2009 at 2:19 pm

    On Wed, 21 Jan 2009, Charlie Brady wrote:
    On Fri, 16 Jan 2009, Alain PORTAL wrote:
    On Friday 16 January 2009, Les Mikesell wrote:
    Alain PORTAL wrote:
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.
    No confusion for me.
    I understood that upgrading X.Y -> X+1.0 is a bad idea
    I don't think it is a bad idea. I just think that sometimes there are some
    problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4
    worked for me, using 'upgradeany' option to anaconda.
    I was surprised to find the following statement in the RHEL 5.3 release
    notes:

    ----
    While anaconda's "upgrade" option will perform an upgrade from
    Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux
    5.3, there is no guarantee that the upgrade will preserve all
    of a system's settings, services, and custom configurations.
    For this reason, Red Hat recommends that you perform a fresh
    installation rather than an upgrade.
    ----

    So they are advising a reinstall of RHEL 5.3 even when you're running RHEL
    5.2, which seems scary to me. I would hope this is mostly because of the
    many Xen improvements, and thus mostly affects their Advanced Platform.

    But still, this is certainly not a good evolution.

    --
    -- dag wieers, dag at centos.org, http://dag.wieers.com/ --
    [Any errors in spelling, tact or fact are transmission errors]
  • James Antill at Jan 21, 2009 at 3:59 pm

    On Wed, 2009-01-21 at 15:19 +0100, Dag Wieers wrote:
    Feel surprised to find in the RHEL 5.3 release notes the following
    statement:

    ----
    While anaconda's "upgrade" option will perform an upgrade from
    Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux
    5.3, there is no guarantee that the upgrade will preserve all
    of a system's settings, services, and custom configurations.
    For this reason, Red Hat recommends that you perform a fresh
    installation rather than an upgrade.
    ----

    So they are advising to reinstall RHEL 5.3 even when you're running RHEL
    5.2.
    *speaking _for me / as me_, as always, etc.*

    I don't see the above text in the release notes, but what I do see is
    the top section of:

    https://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Release_Notes/sect-Release_Notes-Installation_Related_Notes.html

    ...which implies (to me) that the text you quoted is saying something
    like "although a 4.7 => 5.3 upgrade has the same UI as 5.2 => 5.3, you
    shouldn't expect it to work as well in all cases".

    And, of course, that's all anaconda specific ... going 5.2 => 5.3 via
    "yum update" is expected to just work.

    --
    James Antill <james at fedoraproject.org>
    Fedora
  • Dag Wieers at Jan 21, 2009 at 4:11 pm

    On Wed, 21 Jan 2009, James Antill wrote:
    On Wed, 2009-01-21 at 15:19 +0100, Dag Wieers wrote:
    I was surprised to find the following statement in the RHEL 5.3 release
    notes:

    ----
    While anaconda's "upgrade" option will perform an upgrade from
    Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux
    5.3, there is no guarantee that the upgrade will preserve all
    of a system's settings, services, and custom configurations.
    For this reason, Red Hat recommends that you perform a fresh
    installation rather than an upgrade.
    ----

    So they are advising to reinstall RHEL 5.3 even when you're running RHEL
    5.2.
    *speaking _for me / as me_, as always, etc.*

    I don't see the above text in the release notes, but what I do see is
    the top section of:

    https://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Release_Notes/sect-Release_Notes-Installation_Related_Notes.html

    ...which implies (to me) that the text you quoted is saying something
    like "although a 4.7 => 5.3 upgrade has the same UI as 5.2 => 5.3, you
    shouldn't expect it to work as well in all cases".

    And, of course, that's all anaconda specific ... going 5.2 => 5.3 via
    "yum update" is expected to just work.
    I meant announcement, not release notes. See:

    https://www.redhat.com/archives/rhelv5-announce/2009-January/msg00000.html

    If "yum update" is expected to work, the quoted paragraph is very badly
    worded. This is the kind of thing the Ubuntu people use against
    RPM-based distributions.

    And however ill-informed it may be, it is better to prevent than to remedy.

    --
    -- dag wieers, dag at centos.org, http://dag.wieers.com/ --
    [Any errors in spelling, tact or fact are transmission errors]
  • Manuel Wolfshant at Jan 21, 2009 at 2:34 pm

    Charlie Brady wrote:
    On Fri, 16 Jan 2009, Alain PORTAL wrote:
    On Friday 16 January 2009, Les Mikesell wrote:
    Alain PORTAL wrote:
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.
    No confusion for me.
    I understood that upgrading X.Y -> X+1.0 is a bad idea
    I don't think it is a bad idea. I just think that sometimes there are
    some problems, or RedHat is not prepared to say that it will work.
    CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
    Two days ago I did 3.5 -> 5.2 via anaconda (upgradeany). No real
    problems except a lack of X drivers after the install.
  • John Summerfield at Jan 22, 2009 at 3:09 pm

    Manuel Wolfshant wrote:
    Charlie Brady wrote:
    I don't think it is a bad idea. I just think that sometimes there are
    some problems, or RedHat is not prepared to say that it will work.
    CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
    Two days ago I did 3.5 -> 5.2 via anaconda (upgradeany). No real
    problems except a lack of X drivers after the install.
    The main problem I see is that sometimes packages get replaced by others.

    For example, 2.1 contained the wu-imap server. I think from 3 on (and
    certainly in 4; I've not actually installed 3 anywhere), wu-imap was
    dropped and now we can choose between cyrus imap and dovecot.

    Similarly, wu-ftpd was dropped at some point.

    When these package substitutions are made, there is no chance at all of
    the old configuration being translated into the new.

    And then there's postgresql. One has to back up one's data before
    upgrading major postgresql releases and then restore into the new.

    --

    Cheers
    John

    -- spambait
    1aaaaaaa at coco.merseine.nu Z1aaaaaaa at coco.merseine.nu
    -- Advice
    http://webfoot.com/advice/email.top.php
    http://www.catb.org/~esr/faqs/smart-questions.html
    http://support.microsoft.com/kb/555375

    You cannot reply off-list:-)
  • Charlie Brady at Jan 22, 2009 at 4:00 pm

    On Fri, 23 Jan 2009, John Summerfield wrote:

    And then there's postgresql. One has to back up one's data before
    upgrading major postgresql releases and then restore into the new.
    I consider that a major upstream bug.

    However, at the least a %pre script should create an SQL dump before
    upgrading major releases, so the user is not left with an unusable blob.

    Better would be for postgresql to ship a standalone SQL dumper, which can
    read old file formats.
  • Charlie Brady at Jan 22, 2009 at 4:11 pm

    On Thu, 22 Jan 2009, Charlie Brady wrote:

    Better would be for postgresql to ship a standalone SQL dumper...
    i.e. one which is self contained, and doesn't require a running
    postmaster. openldap's slapcat is such a beast, for ldap backend to LDIF
    dumping.
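
    For reference, the slapcat usage alluded to above is a one-liner; a
    sketch (the output path is illustrative):

    ```shell
    # slapcat reads the OpenLDAP backend database files directly -- no
    # running slapd/postmaster-equivalent needed -- and emits LDIF:
    slapcat -l /var/tmp/directory-backup.ldif
    ```
    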
  • Peter Hopfgartner at Jan 22, 2009 at 5:05 pm

    Charlie Brady wrote:
    On Thu, 22 Jan 2009, Charlie Brady wrote:

    Better would be for postgresql to ship a standalone SQL dumper...
    There is an ongoing effort to create an in-place-upgrade for PostgreSQL,
    http://wiki.postgresql.org/images/1/17/Pg_upgrade.pdf


    Regards,

    Peter
    i.e. one which is self contained, and doesn't require a running
    postmaster. openldap's slapcat is such a beast, for ldap backend to LDIF
    dumping.
    _______________________________________________
    CentOS-devel mailing list
    CentOS-devel at centos.org
    http://lists.centos.org/mailman/listinfo/centos-devel

    --

    Dott. Peter Hopfgartner

    R3 GIS Srl - GmbH
    Via Johann Kravogl-Str. 2
    I-39012 Meran/Merano (BZ)
    Email: peter.hopfgartner at r3-gis.com
    Tel. : +39 0473 494949
    Fax : +39 0473 069902
    www : http://www.r3-gis.com
  • Joshua Kramer at Jan 22, 2009 at 5:09 pm

    I consider that a major upstream bug.
    Better would be for postgresql to ship a standalone SQL dumper, which can
    read old file formats.
    Charlie,

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g for your
    major enterprise application? Or MS-SQL 2005 to MS-SQL 2008?

    Any major database version upgrade requires the attention of a qualified
    DBA who knows how to test data and applications against the new DB
    version, and then dump/upgrade/restore.

    For example, PostgreSQL introduced some minor syntactical differences with
    8.3. If your application uses the features affected by these changes, it
    would be impossible to simply 'dump/restore' without some massaging of the
    data and the application.

    PostgreSQL does ship with a dumper, pg_dump. If you have the current
    version of postmaster, then you use pg_dump to connect to that and dump
    your data in a version-agnostic format. IMHO, the effort of writing a
    standalone dumper that can recognize all the old file formats is not worth
    it, because it is a mistake to delete the old version of postmaster off
    your system before you've done a dump of the database.

    Cheers,
    -Josh

    --

    -----
    http://www.globalherald.net/jb01
    GlobalHerald.NET, the Smarter Social Network! (tm)
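
    The dump-then-restore cycle Joshua describes can be sketched as follows
    (the database name and dump path are illustrative; the pg_dump/psql
    invocations are standard PostgreSQL client usage):

    ```shell
    # 1. While the OLD postmaster is still installed and running, dump the
    #    data in a version-agnostic SQL format:
    pg_dump mydb > /var/tmp/mydb.sql

    # 2. Upgrade the packages, move the old data directory aside, initdb,
    #    and start the NEW postmaster. Only then restore:
    createdb mydb
    psql -d mydb -f /var/tmp/mydb.sql
    ```

    The key point, as Joshua notes, is that pg_dump must run against the old
    server before its binaries are removed; the dump itself is then portable
    across major versions (modulo syntax changes such as those in 8.3).
    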
  • Les Mikesell at Jan 22, 2009 at 5:25 pm

    Joshua Kramer wrote:
    Any major database version upgrade requires the attention of a qualified
    DBA who knows how to test data and applications against the new DB
    version, and then dump/upgrade/restore.

    For example, PostgreSQL introduced some minor syntactical differences with
    8.3. If your application uses the features affected by these changes, it
    would be impossible to simply 'dump/restore' without some massaging of the
    data and the application.

    PostgreSQL does ship with a dumper, pg_dump. If you have the current
    version of postmaster, then you use pg_dump to connect to that and dump
    your data in a version-agnostic format. IMHO, the effort of writing a
    standalone dumper that can recognize all the old file formats is not worth
    it, because it is a mistake to delete the old version of postmaster off
    your system before you've done a dump of the database.
    So how do you package such a thing in RPM so it can permit both new and
    old instances to run simultaneously while you do all of this required
    testing? I suppose these days virtualbox is an almost-reasonable answer
    but it just seems wrong to have a system that by design doesn't let you
    test a new instance before replacing the old one.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Jeff Johnson at Jan 22, 2009 at 5:48 pm

    On Jan 22, 2009, at 12:25 PM, Les Mikesell wrote:

    Joshua Kramer wrote:
    Any major database version upgrade requires the attention of a qualified
    DBA who knows how to test data and applications against the new DB
    version, and then dump/upgrade/restore.

    For example, PostgreSQL introduced some minor syntactical differences
    with 8.3. If your application uses the features affected by these
    changes, it would be impossible to simply 'dump/restore' without some
    massaging of the data and the application.

    PostgreSQL does ship with a dumper, pg_dump. If you have the current
    version of postmaster, then you use pg_dump to connect to that and dump
    your data in a version-agnostic format. IMHO, the effort of writing a
    standalone dumper that can recognize all the old file formats is not
    worth it, because it is a mistake to delete the old version of
    postmaster off your system before you've done a dump of the database.
    So how do you package such a thing in RPM so it can permit both new and
    old instances to run simultaneously while you do all of this required
    testing? I suppose these days virtualbox is an almost-reasonable answer
    but it just seems wrong to have a system that by design doesn't let you
    test a new instance before replacing the old one.
    Historical note:
    A long time ago (RHL 5.2, iirc) transparent upgrades
    of postgres databases were attempted within *.rpm
    packaging. The result was a total disaster.

    The moral: don't attempt the database conversion while upgrading.

    Arrange paths in postgres packaging so that both old <-> new utilities
    are available when needed. That can most easily be done by
    including whatever old utilities are needed in the new package
    so that the conversion can be done after the old -> new upgrade.

    Alternatively, one can also attempt multiple installs of postgres
    side-by-side kinda like kernel packages are done.

    hth

    73 de Jeff
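
    The side-by-side approach Jeff mentions (and Les alludes to later) is
    what a source build makes easy. A sketch, with illustrative prefixes,
    versions, and port numbers:

    ```shell
    # Build the new PostgreSQL into its own prefix so it can coexist with
    # the packaged old version:
    ./configure --prefix=/opt/pgsql-8.3
    make && make install

    # Initialise a separate data directory and run on a non-default port
    # (the old server keeps 5432):
    /opt/pgsql-8.3/bin/initdb -D /opt/pgsql-8.3/data
    /opt/pgsql-8.3/bin/pg_ctl -D /opt/pgsql-8.3/data -o "-p 5433" start
    ```

    With both servers up, applications can be pointed at port 5433 for
    testing while production stays on the old instance.
    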
  • Alan Bartlett at Jan 22, 2009 at 5:54 pm
    Guys,

    This is the CentOS-devel list. Will you please take this discussion to the
    general list.

    Thanks.
    Alan.
  • Joshua Kramer at Jan 22, 2009 at 6:39 pm

    So how do you package such a thing in RPM so it can permit both new and
    old instances to run simultaneously while you do all of this required
    testing? I suppose these days virtualbox is an almost-reasonable answer
    I think this discussion is a reflection of our different environments. :)

    On my websites... when 8.3 came out, I downloaded it to a test machine. I
    then did a dump of the production data from 8.2, and did an import into my
    8.3 test machine. After pointing an Apache dev instance at the test
    database, I could verify that my applications still worked, and make any
    code changes that were required.

    After I had a test/dev environment that was stable under 8.3, I planned
    the migration: 1. Dump 8.2; 2. Shutdown 8.2 and remove packages; 3. Move
    8.2's data directory; 4. Install 8.3 packages, and initdb; 5. Import data
    made during the dump and start db; 6. Migrate code changes to web server.
    After things baked for a week and there were no errors, I deleted the old
    8.2 data directories.

    I realize that this is much more difficult if you're using a VM on a web
    host that only allows one machine. Is this the type of environment that
    is constraining you? As long as you can test your application under the
    new database version to make sure it's OK, the migration can be done on
    one machine. But let me ask: in what case would you not want to test your
    application against a new database version?

    --Josh

    --

    -----
    http://www.globalherald.net/jb01
    GlobalHerald.NET, the Smarter Social Network! (tm)
  • Les Mikesell at Jan 22, 2009 at 7:00 pm

    Joshua Kramer wrote:
    So how do you package such a thing in RPM so it can permit both new and
    old instances to run simultaneously while you do all of this required
    testing? I suppose these days virtualbox is an almost-reasonable answer
    I think this discussion is a reflection of our different environments. :)
    I see it as a generic problem with RPM packaging/deployment that you are
    forced to work around by maintaining duplicate equipment.
    On my websites... when 8.3 came out, I downloaded it to a test machine. I
    then did a dump of the production data from 8.2, and did an import into my
    8.3 test machine. After pointing an Apache dev instance at the test
    database, I could verify that my applications still worked, and make any
    code changes that were required.
    That's great if you have a test machine for every application. For an
    important production web site it kind of goes with the territory.
    After I had a test/dev environment that was stable under 8.3, I planned
    the migration: 1. Dump 8.2; 2. Shutdown 8.2 and remove packages; 3. Move
    8.2's data directory; 4. Install 8.3 packages, and initdb; 5. Import data
    made during the dump and start db; 6. Migrate code changes to web server.
    After things baked for a week and there were no errors, I deleted the old
    8.2 data directories.
    What would you do for something simple that doesn't justify buying a
    duplicate machine, yet is probably even more likely to break from a
    version change?
    I realize that this is much more difficult if you're using a VM on a web
    host that only allows one machine. Is this the type of environment that
    is constraining you?
    I'd just like to see a realistic approach to updates via packages.
    As long as you can test your application under the
    new database version to make sure it's OK, the migration can be done on
    one machine. But let me ask: in what case would you not want to test your
    application against a new database version?
    I do want the ability to test while still running the old version. I
    just don't see how that is possible with any RPM-deployed package
    without having duplicate hardware or virtual machines. Postgresql makes
    a good example because the conversion needs both new and old code
    present for different steps, and 8.2->8.3 especially so because some
    implicit casts were removed that can break client code in odd ways, but
    the principle is the same for any change where you want to know the new
    version works in your environment before the old one is shut down. If
    you build from source you can make it use different locations and ports
    and run concurrently, but with RPM binaries you can't.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Jeff Johnson at Jan 22, 2009 at 7:05 pm

    On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
    I'd just like to see a realistic approach to updates via packages.
    Reality check:

    You have a postgres upstream devel with years of experience
    packaging postgres and me both saying
    Don't attempt postgres database upgrades in packaging.

    But create your own virtual reality approach if you want.

    Have fun!

    73 de Jeff
    -------------- next part --------------
    A non-text attachment was scrubbed...
    Name: smime.p7s
    Type: application/pkcs7-signature
    Size: 4664 bytes
    Desc: not available
    Url : http://lists.centos.org/pipermail/centos-devel/attachments/20090122/1fcda276/attachment.bin
  • Les Mikesell at Jan 22, 2009 at 7:23 pm

    Jeff Johnson wrote:
    On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:


    I'd just like to see a realistic approach to updates via packages.
    Reality check:

    You have a postgres upstream devel with years of experience
    packaging postgres and me both saying
    Don't attempt postgres database upgrades in packaging.

    But create your own virtual reality approach if you want.
    I think you missed my point, which is that RPM packaging doesn't provide
    facilities for what needs to be done. Postgres upstream is just more
    honest than most in recognizing the problem. It's not the only thing
    that ever has non-backward-compatible updates.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Jeff Johnson at Jan 22, 2009 at 9:04 pm

    On Jan 22, 2009, at 2:23 PM, Les Mikesell wrote:

    Jeff Johnson wrote:
    On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:


    I'd just like to see a realistic approach to updates via packages.
    Reality check:

    You have a postgres upstream devel with years of experience
    packaging postgres and me both saying
    Don't attempt postgres database upgrades in packaging.

    But create your own virtual reality approach if you want.
    I think you missed my point, which is that RPM packaging doesn't provide
    facilities for what needs to be done. Postgres upstream is just more
    honest than most in recognizing the problem. It's not the only thing
    that ever has non-backward-compatible updates.
    I believe that one can easily conclude
    RPM packaging doesn't provide facilities for what needs to be done
    from
    Don't attempt postgres database upgrades in packaging.

    You're the one who wishes
    I'd just like to see a realistic approach to updates via packages.

    That Does Not Compute with current RPM facilities and existing postgres
    upgrade mechanisms.

    73 de Jeff
  • Les Mikesell at Jan 22, 2009 at 9:40 pm

    Jeff Johnson wrote:
    But create your own virtual reality approach if you want.
    I think you missed my point, which is that RPM packaging doesn't provide
    facilities for what needs to be done. Postgres upstream is just more
    honest than most in recognizing the problem. It's not the only thing
    that ever has non-backward-compatible updates.
    I believe that one can easily conclude
    RPM packaging doesn't provide facilities for what needs to be done
    from
    Don't attempt postgres database upgrades in packaging.

    You're the one who wishes
    I'd just like to see a realistic approach to updates via packages.
    Meaning I'd like RPM to be changed so multiple versions of packages
    could co-exist, as is often necessary in practice.
    That Does Not Compute with current RPM facilities and existing postgres
    upgrade mechanisms.
    Agreed, it doesn't work. Nor does any other RPM-managed update where
    you need to have both old and new packages simultaneously working for a
    while. The special case for the kernel is about the only place where it
    even attempts to keep old versions around for an emergency fallback.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Jeff Johnson at Jan 22, 2009 at 9:49 pm

    On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:
    Agreed, it doesn't work. Nor does any other RPM-managed update where
    you need to have both old and new packages simultaneously working for
    a while. The special case for the kernel is about the only place where
    it even attempts to keep old versions around for an emergency fallback.
    Honking about RPM deficiencies on a CentOS Devel list is hot air going
    no place.

    FWIW, there's no package system that provides sufficient facilities to
    undertake a postgres upgrade reliably during upgrade that I'm aware of.
    Nor is it "recommended" afaik.

    But supply a pointer to your favorite package manager that _DOES_
    attempt postgres database upgrades and I'll be happy to attempt the
    equivalent in RPM.

    Personally, I think that database upgrades have almost nothing
    to do with installing packages, but I'd rather add whatever is useful
    than discuss well-known RPM deficiencies for another decade.

    73 de Jeff

  • Les Mikesell at Jan 22, 2009 at 10:43 pm

    Jeff Johnson wrote:
    On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:

    Agreed, it doesn't work. Nor does any other RPM-managed update where
    you need to have both old and new packages simultaneously working for a
    while. The special case for the kernel is about the only place where it
    even attempts to keep old versions around for an emergency fallback.
    Honking about RPM deficiencies on a CentOS Devel list is hot air going
    no place.

    FWIW, there's no package system that provides sufficient facilities to
    undertake a postgres upgrade reliably during upgrade that I'm aware of.
    Nor is it "recommended" afaik.

    But supply a pointer to your favorite package manager that _DOES_ attempt
    postgres database upgrades and I'll be happy to attempt the equivalent in RPM.

    Personally, I think that database upgrades have almost nothing
    to do with installing packages, but I'd rather add whatever is useful
    than discuss well-known RPM deficiencies for another decade.
    The reason the discussion is intertwined with packaging is that if you
    name the delivered files the same, the old and new can never co-exist as
    they should for the conversion and test period.

    I think the only way it can be done reasonably is to install the new
    code with different names and/or paths and scripts that can be run later
    to do the conversion and (after testing) replacement of the old version.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • Jeff Johnson at Jan 22, 2009 at 11:06 pm

    On Jan 22, 2009, at 5:43 PM, Les Mikesell wrote:


    Personally, I think that database upgrades have almost nothing
    to do with installing packages, but I'd rather add whatever is useful
    than discuss well-known RPM deficiencies for another decade.
    The reason the discussion is intertwined with packaging is that if you
    name the delivered files the same, the old and new can never co-exist
    as they should for the conversion and test period.
    In fact, the old <-> new can/do coexist during upgrade; lib/fsm.c in
    RPM has had a form of apply/commit since forever that puts the new in
    place (the apply) but does not remove the old (renaming into old is
    the commit).

    And there are provisions to rename the old into a subdirectory
    as part of committing the new; at least the necessary path
    name generation including a subdirectory has been there
    since forever in RPM. Adding the necessary logic to
    achieve whatever goal is desired when installing files is just not
    that hard; the code is a state machine.

    Personally (and as I pointed out), including old files in the new
    package
    is likelier to be reliable, and has the additional benefit that whatever
    conversions are needed can be done anytime, not just during a "window"
    during upgrade where old <-> new coexist. A conversion side-effect at
    the scale of a database conversion is hugely complicated to guarantee
    reliability during a "window". Are you volunteering to test?
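    The apply/commit scheme Jeff describes can be sketched in plain shell terms. This is only an illustration of the mechanism (stage the new file, rename the old aside rather than deleting it), not RPM's actual internals; the `.rpmnew`/`.rpmold` names here are borrowed for flavor.

    ```shell
    #!/bin/sh
    # Illustrative sketch of apply/commit file installation: the new file
    # is staged first (apply), then the old one is renamed aside instead
    # of being deleted (commit), so a fallback copy survives the upgrade.
    set -e
    dir=$(mktemp -d)
    target="$dir/prog"
    printf 'old\n' > "$target"            # pretend this is the installed version
    printf 'new\n' > "$dir/prog.rpmnew"   # apply: new content staged alongside

    mv "$target" "$target.rpmold"         # commit step 1: preserve the old copy
    mv "$dir/prog.rpmnew" "$target"       # commit step 2: new takes the real name

    cat "$target"           # -> new
    cat "$target.rpmold"    # -> old
    ```

    After the commit both versions are still on disk, which is exactly the "window" being argued about: the old copy can be consulted or restored until someone decides the new one works.
    
    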
    I think the only way it can be done reasonably is to install the new
    code with different names and/or paths and scripts that can be run
    later to do the conversion and (after testing) replacement of the old
    version.
    You think, I know, is the difference.

    But as always
    Patches cheerfully accepted.

    And you *really* need to take this conversation to an RPM list instead.

    I'd add the CC, but I have no idea what RPM you wish to use.

    73 de Jeff
  • Les Mikesell at Jan 22, 2009 at 11:40 pm

    Jeff Johnson wrote:
    The reason the discussion is intertwined with packaging is that if you
    name the delivered files the same, the old and new can never co-exist as
    they should for the conversion and test period.
    In fact, the old <-> new can/do coexist during upgrade; lib/fsm.c in RPM
    has had
    a form of apply/commit since forever that puts the new in place (the
    apply) but
    does not remove the old (renaming into old is the commit).
    Co-existing, as in being stored somewhere, isn't quite the point. They
    both have to be runnable, find the correct libraries, etc., and the
    one currently known to work has to be found by other applications.
    And there are provisions to rename the old into a subdirectory
    as part of committing the new; at least the necessary path
    name generation including a subdirectory has been there
    since forever in RPM. Adding the necessary logic to
    achieve whatever goal is desired installing files is just not that hard,
    the code is a state machine.
    But RPM can't do it unless it can always succeed with no user/admin
    input. I don't believe that's possible.
    Personally (and as I pointed out), including old files in the new package
    is likelier to be reliable, and has the additional benefit that whatever
    conversions are needed can be done anytime, not just during a "window"
    during upgrade where old <-> new coexist.
    But you can't replace my current old files with new ones of the same
    name until you know they work. And you can't know that they work
    because you don't know what applications I have.
    A conversion side-effect at
    the scale of a database conversion is hugely complicated to guarantee
    reliability during a "window". Are you volunteering to test?
    Sure, if it is something like running a script, testing an app known not
    to work with 8.3 (I think some versions of OpenNMS would qualify) and
    then seeing if the back-out strategy works.
    I think the only way it can be done reasonably is to install the new
    code with different names and/or paths and scripts that can be run later
    to do the conversion and (after testing) replacement of the old version.
    You think, I know, is the difference.
    That's why I want the conversion to be scripted.
    But as always
    Patches cheerfully accepted.

    And you *really* need to take this conversation to an RPM list instead.
    It really doesn't have much to do with RPM. It has to do with naming
    the replacement files so they don't overwrite the things that have to
    remain.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • John Summerfield at Jan 23, 2009 at 1:26 am

    Jeff Johnson wrote:
    On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:

    Agreed, it doesn't work. Nor does any other RPM-managed update where
    you need to have both old and new packages simultaneously working for a
    while. The special case for the kernel is about the only place where it
    even attempts to keep old versions around for an emergency fallback.
    Honking about RPM deficiencies on a CentOS Devel list is hot air going
    no place.

    FWIW, there's no package system that provides sufficient facilities to
    undertake a postgres upgrade reliably during upgrade that I'm aware of.
    Nor is it "recommended" afaik.
    I thought that point was already conceded.

    However, there is nothing now that prevents two versions of postgresql
    from being built with version-dependent directory names (as it almost is):
    [root at numbat ~]# rpm -qvl postgresql | grep ^d
    drwxr-xr-x 2 root root 0 Jan 12 2008 /usr/lib/pgsql
    drwxr-xr-x 2 root root 0 Jan 12 2008
    /usr/share/doc/postgresql-8.1.11
    drwxr-xr-x 2 root root 0 Jan 12 2008
    /usr/share/doc/postgresql-8.1.11/html
    [root at numbat ~]#
    Change that to /usr/lib/pgsql-8.1.11, create a bin directory in there
    and use the alternatives system to choose the default.

    The configuration and data directory names need to be changed too.
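    The versioned-directory-plus-symlink scheme John proposes can be sketched directly; the symlink is the same mechanism the alternatives system manages with `alternatives --install`. All paths below are illustrative (a real package would own `/usr/lib/pgsql-<version>`), and the tiny `psql` stubs just echo their version.

    ```shell
    #!/bin/sh
    # Sketch of side-by-side versioned installs selected via one symlink.
    set -e
    root=$(mktemp -d)
    mkdir -p "$root/usr/lib/pgsql-8.1.11/bin" "$root/usr/lib/pgsql-8.3.5/bin"
    printf '#!/bin/sh\necho 8.1.11\n' > "$root/usr/lib/pgsql-8.1.11/bin/psql"
    printf '#!/bin/sh\necho 8.3.5\n'  > "$root/usr/lib/pgsql-8.3.5/bin/psql"
    chmod +x "$root"/usr/lib/pgsql-*/bin/psql

    # one symlink decides which version is the default
    ln -sfn "$root/usr/lib/pgsql-8.1.11" "$root/usr/lib/pgsql"
    "$root/usr/lib/pgsql/bin/psql"      # -> 8.1.11

    # switching the default is a single symlink change, trivially reversible
    ln -sfn "$root/usr/lib/pgsql-8.3.5" "$root/usr/lib/pgsql"
    "$root/usr/lib/pgsql/bin/psql"      # -> 8.3.5
    ```

    Because both trees stay installed, the admin can test the new version, convert data at leisure, and flip back if an application breaks.
    
    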

    But supply a pointer to your favorite package manager that _DOES_ attempt
    postgres database upgrades and I'll be happy to attempt the equivalent in RPM.

    Personally, I think that database upgrades have almost nothing
    to do with installing packages, but I'd rather add whatever is useful
    than discuss well-known RPM deficiencies for another decade.
    In-package (or upgrade-time) configuration conversion will always fail
    for some packages, but I see no reason that users shouldn't be able to
    run old and new versions of (at least) _some_ packages simultaneously.
    It would make upgrades easier for sysadmins with just a few systems to
    maintain - depending on needs they could upgrade a clone and test it and
    fix and document broken bits without having to start from scratch each time.

    --

    Cheers
    John

    -- spambait
    1aaaaaaa at coco.merseine.nu Z1aaaaaaa at coco.merseine.nu
    -- Advice
    http://webfoot.com/advice/email.top.php
    http://www.catb.org/~esr/faqs/smart-questions.html
    http://support.microsoft.com/kb/555375

    You cannot reply off-list:-)
  • John Summerfield at Jan 23, 2009 at 1:15 am

    Les Mikesell wrote:
    Jeff Johnson wrote:
    On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
    I'd just like to see a realistic approach to updates via packages.
    Reality check:

    You have a postgres upstream devel with years of experience
    packaging postgres and me both saying
    Don't attempt postgres database upgrades in packaging.

    But create your own virtuality reality approach if you want.
    I think you missed my point, which is that RPM packaging doesn't provide
    facilities for what needs to be done. Postgres upstream is just more
    honest than most in recognizing the problem. It's not the only thing
    that ever has non-backward-compatible updates.
    It's not hard to rebuild (some, at least) packages to use rpm's prefix
    option. It allows you to relocate the package at install time.

    Doubtless Jeff will comment on the practicality of this; it's a feature
    that's been around for years and years, but I've not seen it used much.
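    For the record, the relocation John mentions looks like the sketch below. The package file name and prefix are hypothetical, and this only works for packages built with a `Prefix:` tag in the spec; the command is printed rather than executed so it can be reviewed first.

    ```shell
    #!/bin/sh
    # Sketch of installing a relocatable package into a versioned prefix.
    # The package must have been built relocatable (Prefix: tag) for
    # rpm's --prefix option to apply; names here are illustrative.
    pkg=postgresql-8.3.5-1.i386.rpm
    prefix=/opt/pgsql-8.3.5
    cmd="rpm -ivh --prefix $prefix $pkg"
    echo "$cmd"   # printed, not run: review before executing on a server
    ```
    
    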

    Best, though to have a complete test system where the entire application
    - OS, database, webserver and anything else required is tested as an
    integrated whole.

    It's getting easier with so many virtualisation choices, but even that
    aside, most organisations of any size should be able to find an old
    Pentium IV or better to test on.

    --

    Cheers
    John

  • Charlie Brady at Jan 22, 2009 at 6:19 pm

    On Thu, 22 Jan 2009, Joshua Kramer wrote:

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your
    major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
    No, but I wouldn't choose to use those.
    PostgreSQL does ship with a dumper, pg_dump. If you have the current
    version of postmaster, then you use pg_dump to connect to that and dump
    your data in a version-agnostic format.
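    The dump/upgrade/restore cycle Joshua refers to is sketched below as a printed plan rather than live commands, since every step is destructive on a real server. Paths, service names, and the package name are illustrative of a CentOS-style layout; the dump must be taken while the *old* server is still installed and running.

    ```shell
    #!/bin/sh
    # Sketch of the documented major-version upgrade cycle for postgres.
    # Printed, not executed, so the sequence can be reviewed first.
    dump=/var/tmp/pg-all.sql
    plan="pg_dumpall > $dump                         # 1. version-agnostic SQL dump (old server)
    service postgresql stop                          # 2. stop the old server
    mv /var/lib/pgsql/data /var/lib/pgsql/data.old   # 3. keep the old cluster until verified
    yum update postgresql-server                     # 4. install the new major version
    service postgresql start                         # 5. initdb runs against the now-empty dir
    psql -U postgres -f $dump                        # 6. replay the dump into the new cluster"
    printf '%s\n' "$plan"
    ```

    Step 3 is what keeps a back-out path open: until the applications are verified against the new server, the old cluster is still intact on disk.
    
    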
    I know all that.

    Thanks.
  • Hugo van der Kooij at Jan 23, 2009 at 6:42 am

    Joshua Kramer wrote:
    I consider that a major upstream bug.
    Better would be for postgresql to ship a standalone SQL dumper, which can
    read old file formats.
    Charlie,

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your
    major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
    You Windows users: Yes, they would

    Hugo

    --
    hvdkooij at vanderkooij.org http://hugo.vanderkooij.org/
    PGP/GPG? Use: http://hugo.vanderkooij.org/0x58F19981.asc

    A: Yes.
    Q: Are you sure?
    A: Because it reverses the logical flow of conversation.
    Q: Why is top posting frowned upon?
    Bored? Click on http://spamornot.org/ and rate those images.

    Nid wyf yn y swyddfa ar hyn o bryd. Anfonwch unrhyw waith i'w gyfieithu.
    ("I am not in the office at the moment. Send any work to be translated.")
  • Jeff Johnson at Jan 23, 2009 at 1:28 pm

    On Jan 23, 2009, at 1:42 AM, Hugo van der Kooij wrote:

    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    Joshua Kramer wrote:
    I consider that a major upstream bug.
    Better would be for postgresql to ship a standalone SQL dumper,
    which can
    read old file formats.
    Charlie,

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for
    your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
    You Windows users: Yes, they would
    Actually, it's "commercial" vs "FLOSS" for the database that is the
    distinguishing attribute determining whether upgrades are simple
    in the above.

    Most FLOSS databases, like postgres, are harder to upgrade than
    "commercial" databases like Oracle.

    73 de Jeff
  • Charlie Brady at Jan 23, 2009 at 3:38 pm

    On Fri, 23 Jan 2009, Jeff Johnson wrote:
    On Jan 23, 2009, at 1:42 AM, Hugo van der Kooij wrote:

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your
    major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
    You Windows users: Yes, they would
    Actually, it's "commercial" vs "FLOSS" for the database that is the
    distinguishing attribute determining whether upgrades are simple
    in the above.

    Most FLOSS databases, like postgres, are harder to upgrade than
    "commercial" databases like Oracle.
    And it will remain that way, until FLOSS developers consider it legitimate
    to wonder why it is that way, and consider how to improve the situation.
    From my point of view, what's egregious with packaged postgresql is that
    it allows you to "upgrade" a postgresql installation to a state where the
    data is no longer accessible. At the least, one should be able to dump the
    data to SQL after the upgrade.

    There's been much discussion about what rpm can and cannot do. One thing
    rpm can do, however, is to run a pre-script which uses the files of a
    previously installed version. A pre-script could detect an upgrade from
    the old version which uses an incompatible backend format, and could
    then create the SQL dump (starting postmaster and waiting for it if
    required).
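    Charlie's detection idea can be sketched like this. In an RPM scriptlet `$1` really is the install count (>= 2 on upgrade), and a PostgreSQL data directory really does record its major version in a `PG_VERSION` file; everything else (paths, the packaged version number, the dump command being echoed rather than run) is illustrative.

    ```shell
    #!/bin/sh
    # Sketch of a %pre-style check: on upgrade, compare the on-disk
    # cluster's major version with the packaged one, and take a SQL dump
    # before the old binaries disappear. The dump command is only echoed.
    new_major=8.3   # version the hypothetical new package ships

    pre_upgrade() {
        [ "$1" -ge 2 ] || return 0                # fresh install: nothing to do
        [ -f "$datadir/PG_VERSION" ] || return 0  # no existing cluster
        old_major=$(cat "$datadir/PG_VERSION")
        if [ "$old_major" != "$new_major" ]; then
            # a real scriptlet would run this, not just print it
            echo "pg_dumpall > $datadir.pre-$new_major.sql"
        fi
    }

    # simulate upgrading over an existing 8.1 cluster
    datadir=$(mktemp -d)
    echo 8.1 > "$datadir/PG_VERSION"
    pre_upgrade 2
    ```

    Note this sidesteps none of Lamar's objections elsewhere in the thread (Anaconda-mediated upgrades, disk space, compiled C functions); it only shows that the version detection itself is cheap.
    
    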

    I don't buy the arguments that changes in the supported SQL language make
    automated upgrades of the backend data impossible. Dump, upgrade,
    re-import couldn't work if that were the case.

    Thanks (over and out).

    ---
    Charlie
  • Les Mikesell at Jan 23, 2009 at 10:57 pm

    Charlie Brady wrote:
    From my point of view, what's egregious with packaged postgresql is that
    it allows you to "upgrade" a postgresql installation to a state where the
    data is no longer accessible. At the least, one should be able to dump the
    data to SQL after the upgrade.

    There's been much discussion about what rpm can and cannot do. One thing
    rpm can do, however, is to run a pre-script which uses the files of a
    previously installed version. A pre-script could detect an upgrade from
    the old version which uses an incompatible backend format, and could
    then create the SQL dump (starting postmaster and waiting for it if
    required).
    Maybe. What happens if you run out of space? Or have to choose
    available space from different partitions or network mounts? Or you
    don't have the space for the reload in the new format? These are all
    likely scenarios for database machines.
    I don't buy the arguments that changes in the supported SQL language make
    automated upgrades of the backend data impossible. Dump, upgrade,
    re-import couldn't work if that were the case.
    They may work, but you can't assume that the applications will work
    unchanged on the new version, or that the applications are all part of
    the same upgrade. For example, anything that relied on the implicit
    casts that were removed between 8.2 and 8.3 won't work, so you'll need
    to convert back when you find that out. This doesn't mean the
    conversion can't be automated, just that the operator may need to make a
    few choices along the way, including when it is safe to remove the old
    version.

    --
    Les Mikesell
    lesmikesell at gmail.com
  • John Summerfield at Feb 10, 2009 at 1:46 pm

    Joshua Kramer wrote:
    I consider that a major upstream bug.
    Better would be for postgresql to ship a standalone SQL dumper, which can
    read old file formats.
    Charlie,

    Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your
    major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?

    Any major database version upgrade requires the attention of a qualified
    DBA who knows how to test data and applications against the new DB
    version, and then dump/upgrade/restore.
    I used to work for SPL (Australia) in the early 80s. We were the
    Australian agent for Software AG, and sold and supported ADABAS and
    related software in Australia (and, I think, NZ).

    When our clients upgraded from 3.2.x to 4.1.x the index structures
    changed (as you might expect, with improved algorithms and maybe
    increased capacity), but the data on disk was unaffected. In principle,
    going back or forward required no more than rebuilding indexes (and, of
    course, the attendant maintenance procedures etc).


    For example, PostgreSQL introduced some minor syntactical differences with
    8.3. If your application uses the features affected by these changes, it
    would be impossible to simply 'dump/restore' without some massaging of the
    data and the application.

    PostgreSQL does ship with a dumper, pg_dump. If you have the current
    The previous writer said "standalone." pg_dump is not: it needs a running server to connect to.





    --

    Cheers
    John

  • Joshua Kramer at Jan 22, 2009 at 4:22 pm

    And then there's postgresql. One has to backup one's data before
    upgrading major postgresql releases and then restore into the new.
    Not to veer completely off-topic, but the PostgreSQL Development Group
    (PGDG) are very good about making RHEL packages. Unless the application
    you're using is constrained to the particular PG version supplied by the
    Upstream Provider, or you are paying the Upstream Provider for support and
    you want to stick with their packages - using the PGDG packages will
    provide you with more benefits than sticking with the OS-based packages,
    provided you can justify the time to dump-restore. There really isn't a
    compelling reason to stick with 8.1, as 8.3 has many performance benefits.

    Cheers,
    -Josh

    --

    -----
    http://www.globalherald.net/jb01
    GlobalHerald.NET, the Smarter Social Network! (tm)
  • John Summerfield at Feb 10, 2009 at 1:37 pm

    Charlie Brady wrote:
    On Fri, 16 Jan 2009, Alain PORTAL wrote:
    On Friday 16 January 2009, Les Mikesell wrote:
    Alain PORTAL wrote:
    Are you really sure about that?
    It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0
    which was the first I tried). They do tend to be big updates, though.
    Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger
    changes.
    No confusion for me.
    I understood that upgrading X.Y -> X+1.0 is a bad idea
    I don't think it is a bad idea. I just think that sometimes there are
    some problems, or RedHat is not prepared to say that it will work.
    CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
    The biggest problem is when the upgrade is from wu-ftpd and wu-imap to
    vsftpd and cyrus-imapd or such. There's no good way to automate such a
    process.



    --

    Cheers
    John

  • Hugo van der Kooij at Jan 18, 2009 at 12:40 pm

    Alain PORTAL wrote:
    On Friday 16 January 2009, Scott Silva wrote:
    When 5.3 is released there is a "magic" incantation that will transform
    your 5.2 into 5.3!

    Here it is;

    Are you ready?


    "yum update"

    Easy, huh?
    Are you really sure about that?
    When the server is at least an hour's drive away, you tend to make
    sure before you do this.

    But I have guided various CentOS machines through the minor versions:
    from 5.0 to 5.1 and 5.2, and with 4.2 all the steps to 4.7.

    Never so much as a glitch.

    In fact a normal kernel security update in between versions is the only
    time I need to do a reboot of the hardware and keep my fingers crossed.
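    The minor-release path Hugo and Scott describe amounts to very little, which is the point: within the same major release, "yum update" is the whole procedure. A hedged sketch (printed as a plan rather than executed; the `yum clean all` step is conventional advice, not a requirement):

    ```shell
    #!/bin/sh
    # Sketch of a CentOS point-release update (e.g. 5.2 -> 5.3).
    plan="yum clean all            # drop cached metadata from the old point release
    yum update                 # pulls everything belonging to the new point release
    cat /etc/redhat-release    # confirm the new version string
    reboot                     # only needed when a new kernel was installed"
    printf '%s\n' "$plan"
    ```
    
    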

    Hugo.


    --
    hvdkooij at vanderkooij.org http://hugo.vanderkooij.org/
  • Andy Burns at Jan 20, 2009 at 10:20 pm

    2009/1/16 Alain PORTAL <alain.portal at univ-montp2.fr>:

    Does somebody know when RHEL 5.3 will be released?
    That'll be today ;-)

    http://www.redhat.com/about/news/prarchive/2009/rhel_5_3.html
  • Alain PORTAL at Jan 20, 2009 at 10:47 pm

    On Tuesday 20 January 2009, Andy Burns wrote:
    2009/1/16 Alain PORTAL <alain.portal at univ-montp2.fr>:
    Does somebody know when RHEL 5.3 will be released?
    That'll be today ;-)

    http://www.redhat.com/about/news/prarchive/2009/rhel_5_3.html
    Well! Good news! ;-)

    --
    Les pages de manuel Linux en français
    http://manpagesfr.free.fr/
  • Lamar Owen at Jan 22, 2009 at 5:43 pm

    Charlie Brady wrote:
    On Fri, 23 Jan 2009, John Summerfield wrote:

    And then there's postgresql. One has to backup one's data before
    upgrading major postgresql releases and then restore into the new.
    I consider that a major upstream bug.
    Upstream collectively disagrees with you. Even though a PostgreSQL core developer, Tom Lane, works for Red Hat and packages the RHEL PostgreSQL packages. Upgrading PostgreSQL is a HARD thing to do without using the documented 'dump-upgrade-initdb-restore' sequence (unless you want to get multiversion installs working, and use Slony to do the migration... good luck with the multiversion! Although Debian has that piece worked out, Debian can do things during install/upgrade in .deb packages that RPMs cannot do).
    However, at the least a %pre script should create an SQL dump before
    upgrading major releases, so user is not left with an unusable blob.
    Better would be for postgresql to ship a standalone SQL dumper, which can
    read old file formats.
    I maintained the 'PGDG' or community upstream RPMs for PostgreSQL for five years, from 1999 to 2004. Personal reasons caused me to hand that over to Devrim, the current RPM maintainer lead. So I've fought with this issue for a long time (as John knows).

    A %pre scriptlet has no way of reliably detecting whether it is running under Anaconda during a media-fed upgrade or from a fully installed system. If the %pre scriptlet is running under Anaconda, it cannot do an SQL dump, which requires a major portion of the normal system to be present and running to complete. This is not the case in an Anaconda-mediated upgrade, during which many basic system services are simply not there.

    Then there is the disk space issue (making sure you don't run out). Oh, and compiled C functions.

    There was once a pg_upgrade program that could do some of this stuff; however, it can break in subtle ways. The PostgreSQL system catalogs that are part of the database contain things that are part of the core system, including functions, operators, and the like. And sometimes the actual tuple format changes from one release to another...

    Now, it has been a while since I last looked at the upgrading situation; if you want to learn more about it, read the archives of the pgsql-hackers list and search for the various and many upgrade discussions.

    As a user, I have a CentOS 4 system that at this point in time cannot be upgraded past its current PostgreSQL version due to the need to store photographs from Microsoft Access as large objects. The support for the method used is not in any subsequent version, and in fact doesn't work on anything but the version shipped with CentOS 4. We will have to recode the application to get it ported, unfortunately. Any automatic upgrade of this system would break, and break badly.

    The moral is that, as a server administrator, you must be ever diligent to make sure that 'upgraded' software doesn't dramatically break things; and you cannot rely on the OS upgrade to do it. (I'm thinking BIND, Apache modules, java versions, amavisd and sendmail, among other things that tend to break in upgrades....)

    --
    Lamar Owen
    Chief Information Officer
    Pisgah Astronomical Research Institute
    1 PARI Drive
    Rosman, NC 28772
    828-862-5554
    www.pari.edu
