I did an audio interview today, and it is online now:

http://bsdtalk.blogspot.com/2006/02/bsdtalk015-interview-with-postgresql.html

--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073

  • David Fetter at Feb 8, 2006 at 8:41 am

    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:
    I did an audio interview today, and it is online now:

    http://bsdtalk.blogspot.com/2006/02/bsdtalk015-interview-with-postgresql.html
    Great interview. You hit a lot of the high points :)

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    somewhere? As far as converting them from shell to Perl, I'm sure
    you'll find a flock of volunteers to help.

    Cheers,
    D
    --
    David Fetter david@fetter.org http://fetter.org/
    phone: +1 415 235 3778

    Remember to vote!
  • Bruce Momjian at Feb 8, 2006 at 2:00 pm

    David Fetter wrote:
    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:
    I did an audio interview today, and it is online now:

    http://bsdtalk.blogspot.com/2006/02/bsdtalk015-interview-with-postgresql.html
    Great interview. You hit a lot of the high points :)

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    /contrib/pgupgrade
    somewhere? As far as converting them from shell to Perl, I'm sure
    you'll find a flock of volunteers to help.
    Yeah, but modifying the disk pages is still a problem.

    --
    Bruce Momjian | http://candle.pha.pa.us
    pgman@candle.pha.pa.us | (610) 359-1001
    + If your life is a hard drive, | 13 Roberts Road
    + Christ can be your backup. | Newtown Square, Pennsylvania 19073
  • David Fetter at Feb 8, 2006 at 6:33 pm

    On Wed, Feb 08, 2006 at 09:00:46AM -0500, Bruce Momjian wrote:
    David Fetter wrote:
    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    /contrib/pg_upgrade
    I see it in the attic, but not in CVS TIP. Is there some way to get
    it back? Or should it go somewhere else until it's at least slightly
    working?
    somewhere? As far as converting them from shell to Perl, I'm sure
    you'll find a flock of volunteers to help.
    Yeah, but modifying the disk pages is still a problem.
    I understand that not everybody will choose this path, but we've gone
    to a *lot* of trouble--and as you pointed out, have benefitted
    directly from the effort--to provide pointy-hair checkboxes like the
    Windows port. "In-place upgrade" is one of those checkboxes, and I'm
    pretty confident that getting it working will have at a minimum the
    same benefits to the rest of the code that making the Windows port
    did.

    Cheers,
    D
    --
    David Fetter david@fetter.org http://fetter.org/
    phone: +1 415 235 3778

    Remember to vote!
  • Andrew Dunstan at Feb 8, 2006 at 6:47 pm

    David Fetter wrote:
    On Wed, Feb 08, 2006 at 09:00:46AM -0500, Bruce Momjian wrote:

    David Fetter wrote:

    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    /contrib/pg_upgrade
    I see it in the attic, but not in CVS TIP. Is there some way to get
    it back? Or should it go somewhere else until it's at least slightly
    working?
    There is a pgfoundry project, but it appears to be dead:
    http://pgfoundry.org/projects/pgupgrade

    This would be a very fine project for someone to pick up (maybe one of
    the corporate supporters could sponsor someone to work on it?)

    cheers

    andrew
  • Josh Berkus at Feb 8, 2006 at 7:55 pm
    Andrew,
    This would be a very fine project for someone to pick up (maybe one of
    the corporate supporters could sponsor someone to work on it?)
    We looked at it for Greenplum but just couldn't justify putting it near
    the top of the priority list. The work/payoff ratio is terrible.

    One justification for in-place upgrades is to be faster than
    dump/reload. However, if we're assuming the possibility of new/modified
    header fields which could then cause page splits on pages which are 90%
    capacity, then this time savings would be on the order of no more than
    50% of load time, not the 90% of load time required to justify the
    programming effort involved -- especially when you take into account
    needing to provide multiple conversions, e.g. 7.3 --> 8.1, 7.4 --> 8.1, etc.
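[The time-savings argument above can be put as back-of-the-envelope arithmetic. All numbers here are the illustrative percentages from this message, not measurements, and the dump/reload time is a made-up figure.]

```python
# Rough model of the upgrade-time argument: if header changes force page
# splits on nearly-full pages, an in-place rewrite still touches most of
# the data, so assume it costs ~50% of a full dump/reload rather than the
# ~10% that would justify the programming effort.

dump_reload_hours = 10.0                       # hypothetical dump/reload time
in_place_hours = dump_reload_hours * 0.50      # assumed cost with page splits

savings = 1 - in_place_hours / dump_reload_hours
print(f"time saved: {savings:.0%}")            # 50%, short of the 90% target
```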

    The second reason for in-place upgrade is for large databases where the
    owner does not have enough disk space for two complete copies of the
    database. Again, this is not solvable; if we want in-place upgrade to
    be fault-tolerant, then we need the doubled disk space anyway (you could
    do a certain amount with compression, but you'd still need 150%-175%
    space so it's not much help).
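[The disk-space point works out the same way. Again, the database size is hypothetical and the percentages are the rough figures from this message.]

```python
# Illustrative space arithmetic for a fault-tolerant in-place upgrade:
# you still need a safety copy of the data, so "in-place" doesn't save
# the doubled disk space; compression only trims it to ~150-175%.

db_size_gb = 500                         # hypothetical database size

plain_backup_gb = db_size_gb * 2.0       # original + full safety copy
compressed_gb = db_size_gb * 1.65        # midpoint of the 150%-175% estimate

print(f"plain: {plain_backup_gb:.0f} GB, compressed: {compressed_gb:.0f} GB")
```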

    Overall, it would be both easier and more effective to write a Slony
    automation wrapper which does the replication, population, and
    switchover for you.

    --Josh
  • Neil Conway at Feb 8, 2006 at 8:29 pm

    On Wed, 2006-02-08 at 11:55 -0800, Josh Berkus wrote:
    One justification for in-place upgrades is to be faster than
    dump/reload. However, if we're assuming the possibility of new/modified
    header fields which could then cause page splits on pages which are 90%
    capacity, then this time savings would be on the order of no more than
    50% of load time
    Well, if you need to start shuffling heap tuples around, you also need
    to update indexes, in addition to rewriting all the heap pages. This
    would require work on the order of VACUUM FULL in the worst case, which
    is pretty expensive.

    However, we don't change the format of heap or index pages _that_ often.
    An in-place upgrade script that worked when the heap/index page format
    has not changed would still be valuable -- only the system catalog
    format would need to be modified.
    The second reason for in-place upgrade is for large databases where the
    owner does not have enough disk space for two complete copies of the
    database. Again, this is not solvable; if we want in-place upgrade to
    be fault-tolerant, then we need the doubled disk space anyway
    When the heap/index page format hasn't changed, we would only need to
    backup the system catalogs, which would be far less expensive.
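[The restricted case described here could be detected mechanically: check whether a relation file's stamped page-layout version matches what the new server expects before deciding whether user data needs rewriting. The sketch below is hypothetical; in particular the header offset is an assumption for illustration, not taken from any actual release.]

```python
# Sketch: read the page-layout version stamped in the first page of a
# relation file.  VERSION_OFFSET is an ASSUMED offset of the
# pd_pagesize_version field; real page headers vary by release.
import struct

PAGE_SIZE = 8192
VERSION_OFFSET = 18   # assumed field offset, for illustration only

def page_layout_version(path: str) -> int:
    """Return the layout version from the first page of a relation file."""
    with open(path, "rb") as f:
        header = f.read(PAGE_SIZE)
    # pd_pagesize_version packs the page size and the layout version into
    # one 16-bit field; the version lives in the low byte.
    (pagesize_version,) = struct.unpack_from("<H", header, VERSION_OFFSET)
    return pagesize_version & 0x00FF

def needs_data_rewrite(path: str, expected_version: int) -> bool:
    """True when the file's format differs and user data must be rewritten."""
    return page_layout_version(path) != expected_version
```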

    -Neil
  • Rick Gigger at Feb 8, 2006 at 8:31 pm

    On Feb 8, 2006, at 12:55 PM, Josh Berkus wrote:

    Andrew,
    This would be a very fine project for someone to pick up (maybe
    one of the corporate supporters could sponsor someone to work on it?)
    We looked at it for Greenplum but just couldn't justify putting it
    near the top of the priority list. The work/payoff ratio is terrible.

    One justification for in-place upgrades is to be faster than dump/
    reload. However, if we're assuming the possibility of new/modified
    header fields which could then cause page splits on pages which are
    90% capacity, then this time savings would be on the order of no
    more than 50% of load time, not the 90% of load time required to
    justify the programming effort involved -- especially when you take
    into account needing to provide multiple conversions, e.g. 7.3 -->
    8.1, 7.4 --> 8.1, etc.
    I just posted an idea for first upgrading a physical backup of the
    data directory that you would create when doing "Online backups" and
    then also altering the WAL records as they are applied during
    recovery. That way the actual load time might still be huge but
    since it could run in parallel with the running server it would
    probably eliminate 99% of the downtime. Would that be worth the effort?

    Also all the heavy lifting could be offloaded to a separate box while
    your production server just keeps running unaffected.
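[The shape of this pipeline can be sketched as two hooks: convert every page of the copied data directory, then translate each old-format WAL record before replaying it. None of these functions exist in PostgreSQL; the converters are entirely hypothetical placeholders for the real format-conversion work.]

```python
# Hypothetical pipeline: offline-upgrade a physical base backup, then
# translate WAL records to the new format as they are replayed.  The
# production server keeps running; only the switchover causes downtime.

def upgrade_backup(pages, convert_page):
    """Rewrite every page of the copied data directory in the new format."""
    return [convert_page(p) for p in pages]

def replay_wal(records, convert_record, apply_record):
    """Translate each old-format WAL record before applying it."""
    for rec in records:
        apply_record(convert_record(rec))
```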
    The second reason for in-place upgrade is for large databases where
    the owner does not have enough disk space for two complete copies
    of the database. Again, this is not solvable; if we want in-place
    upgrade to be fault-tolerant, then we need the doubled disk space
    anyway (you could do a certain amount with compression, but you'd
    still need 150%-175% space so it's not much help).
    Yeah, anyone who has so much data that they need this feature but
    isn't willing to back it up is crazy. Plus disk space is cheap.
    Overall, it would be both easier and more effective to write a
    Slony automation wrapper which does the replication, population,
    and switchover for you.
    Now that is something that I would actually use. I think that a
    little bit of automation would greatly enhance the number of users
    using slony.

    Rick
  • Tom Lane at Feb 8, 2006 at 8:51 pm

    Josh Berkus writes:
    This would be a very fine project for someone to pick up (maybe one of
    the corporate supporters could sponsor someone to work on it?)
    We looked at it for Greenplum but just couldn't justify putting it near
    the top of the priority list. The work/payoff ratio is terrible.
    I agree that doing pgupgrade in full generality is probably not worth
    the investment required. However, handling the restricted case where
    no changes are needed in user tables or indexes would be considerably
    easier, and I think it would be worth doing.

    If such a tool were available, I don't think it'd be hard to get
    consensus on organizing our releases so that it were applicable more
    often than not. We could postpone changes that would affect user
    table contents until we'd built up a backlog that would all go into
    one release. Even a minimal commitment in that line would probably
    result in pgupgrade working for at least every other release, and
    that would be enough to make it worthwhile if you ask me ...

    regards, tom lane
  • Hannu Krosing at Feb 8, 2006 at 8:59 pm

    On Wed, 2006-02-08 at 15:51, Tom Lane wrote:
    Josh Berkus <josh@agliodbs.com> writes:
    This would be a very fine project for someone to pick up (maybe one of
    the corporate supporters could sponsor someone to work on it?)
    We looked at it for Greenplum but just couldn't justify putting it near
    the top of the priority list. The work/payoff ratio is terrible.
    I agree that doing pgupgrade in full generality is probably not worth
    the investment required. However, handling the restricted case where
    no changes are needed in user tables or indexes would be considerably
    easier, and I think it would be worth doing.
    How hard would it be to modify postgres so that it can handle multiple
    heap page formats ?

    This could come handy for pgupgrade, but my real interest would be to
    have several task-specific formats supported even in non-upgrade
    situations, such as a more compact heap page format for read-only
    archive/analysis tables.
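[One way to picture the multi-format support being asked about is a registry mapping a page-layout version to the routines that know how to read it. The version numbers and handler classes below are hypothetical, purely to illustrate the dispatch.]

```python
# Sketch: dispatch page decoding on a per-page layout version.
PAGE_HANDLERS = {}

def register_format(version):
    """Register a handler class for one page-layout version."""
    def wrap(cls):
        PAGE_HANDLERS[version] = cls()
        return cls
    return wrap

@register_format(3)
class LegacyHeapPage:
    def tuples(self, raw):
        return f"decode {len(raw)} bytes as 7.4-era page"

@register_format(4)
class CompactHeapPage:
    def tuples(self, raw):
        return f"decode {len(raw)} bytes as compact read-only page"

def read_page(version, raw):
    """Route a raw page to the decoder for its stamped format version."""
    return PAGE_HANDLERS[version].tuples(raw)
```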

    --------------
    Hannu
  • Jeff Davis at Feb 11, 2006 at 5:10 am

    Hannu Krosing wrote:

    How hard would it be to modify postgres so that it can handle multiple
    heap page formats ?

    If the on-disk format is changed to add a feature (rather than for some
    performance reason), then that would mean that the feature would have to
    be available or not per disk page. Wouldn't that cause problems?

    Regards,
    Jeff Davis
  • Alvaro Herrera at Feb 12, 2006 at 8:37 pm

    Jeff Davis wrote:
    Hannu Krosing wrote:
    How hard would it be to modify postgres so that it can handle multiple
    heap page formats ?
    If the on-disk format is changed to add a feature (rather than for some
    performance reason), then that would mean that the feature would have to
    be available or not per disk page. Wouldn't that cause problems?
    Yeah, it would be problematic and difficult to handle. For example in
    subtransactions it would be a hassle to handle the 7.4 heap page format,
    maybe impossible without race conditions.

    --
    Alvaro Herrera http://www.CommandPrompt.com/
    The PostgreSQL Company - Command Prompt, Inc.
  • Josh Berkus at Feb 8, 2006 at 8:59 pm
    Tom,
    If such a tool were available, I don't think it'd be hard to get
    consensus on organizing our releases so that it were applicable more
    often than not. We could postpone changes that would affect user
    table contents until we'd built up a backlog that would all go into
    one release. Even a minimal commitment in that line would probably
    result in pgupgrade working for at least every other release, and
    that would be enough to make it worthwhile if you ask me ...
    We could even make that our first/second dot difference in the future.
    That is, 8.2 will be pg-upgradable from 8.1 but 9.0 will not.
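[A toy encoding of that convention, treating the first digit as the boundary: upgrade in place across second-digit releases (8.1 -> 8.2) but not across first-digit releases (8.x -> 9.0). This is a sketch of the proposal, not project policy.]

```python
# Hypothetical rule: in-place upgrade is supported only when the
# first version component is unchanged.
def in_place_upgradable(old: str, new: str) -> bool:
    return old.split(".")[0] == new.split(".")[0]

print(in_place_upgradable("8.1", "8.2"))  # True
print(in_place_upgradable("8.1", "9.0"))  # False
```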

    --
    --Josh

    Josh Berkus
    Aglio Database Solutions
    San Francisco
  • Bruce Momjian at Feb 9, 2006 at 3:53 am

    David Fetter wrote:
    On Wed, Feb 08, 2006 at 09:00:46AM -0500, Bruce Momjian wrote:
    David Fetter wrote:
    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    /contrib/pg_upgrade
    I see it in the attic, but not in CVS TIP. Is there some way to get
    it back? Or should it go somewhere else until it's at least slightly
    working?
    I think from cvsweb you can get to the Attic files.


    --
    Bruce Momjian | http://candle.pha.pa.us
    pgman@candle.pha.pa.us | (610) 359-1001
    + If your life is a hard drive, | 13 Roberts Road
    + Christ can be your backup. | Newtown Square, Pennsylvania 19073
  • Rick Gigger at Feb 8, 2006 at 7:38 pm

    On Feb 8, 2006, at 7:00 AM, Bruce Momjian wrote:

    David Fetter wrote:
    On Tue, Feb 07, 2006 at 11:43:40PM -0500, Bruce Momjian wrote:
    I did an audio interview today, and it is online now:

    http://bsdtalk.blogspot.com/2006/02/bsdtalk015-interview-with-postgresql.html
    Great interview. You hit a lot of the high points :)

    You mentioned in-place upgrade scripts. Are those in contrib/
    somewhere? On GBorg? On PgFoundry? If not, could you put them
    /contrib/pgupgrade
    somewhere? As far as converting them from shell to Perl, I'm sure
    you'll find a flock of volunteers to help.
    Yeah, but modifying the disk pages is still a problem.
    Maybe this is totally crazy, but for those who are not using Slony but
    are using incremental backup and want to upgrade without doing a
    time-consuming dump/reload (this is not actually a problem for me, as
    my data is not so large that a dump/reload is a huge problem): would it
    be possible to apply pgupgrade to the physical backup before you
    restore, and then alter each WAL record as it is restored so that it
    writes all pages in the new format?

    Then you could do all the work on a different box and quickly switch
    over to it after the restore is complete. You could eliminate most
    of the downtime.

    Is that even feasible? Not something that would help me now but it
    might make some people very happy (and maybe someday I will need it
    as well.)

    Rick
  • Robert Bernier at Feb 8, 2006 at 12:10 pm

    On Tuesday 07 February 2006 23:43, Bruce Momjian wrote:
    I did an audio interview today, and it is online now:


    http://bsdtalk.blogspot.com/2006/02/bsdtalk015-interview-with-postgresql.html
    Nice going, Bruce. I noticed that both Dru and Dan Langille (creator of FreshPorts) have also had interviews done on this site.


    cheers

Discussion Overview
group: pgsql-advocacy @ postgresql
posted: Feb 8, '06 at 4:43a
active: Feb 12, '06 at 8:37p
posts: 16
users: 11
website: postgresql.org
irc: #postgresql
