I worked up a small patch to support Terabyte setting for memory.
Which is OK, but it only works for 1TB, not for 2TB or above.

Which highlights that since we measure things in kB, we have an
inherent limit of 2047GB for our memory settings. It isn't beyond
belief that we'll want to go that high, or at least it won't be by the
end of 2014, and the limit will become annoying sometime before 2020.
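
For the arithmetic behind that 2047GB figure: memory GUCs are measured
in kB and stored in a signed 32-bit int, so the ceiling is INT_MAX kB.
A minimal standalone C sketch of the calculation (not PostgreSQL code):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Memory GUCs: kB values held in a signed 32-bit int. */
        long long max_kb = INT_MAX;                 /* 2147483647 kB */
        long long max_gb = max_kb / (1024 * 1024);  /* 1048576 kB per GB */
        long long max_tb = max_gb / 1024;

        printf("%lldGB (%lldTB)\n", max_gb, max_tb);  /* 2047GB (1TB) */
        return 0;
    }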

Solution seems to be to support something potentially bigger than INT
for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
platform we're on.

Opinions?

--
  Simon Riggs http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

  • Gavin Flower at May 21, 2013 at 9:41 pm

    On 22/05/13 09:13, Simon Riggs wrote:
    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    Which highlights that since we measure things in kB, we have an
    inherent limit of 2047GB for our memory settings. It isn't beyond
    belief that we'll want to go that high, or at least it won't be by the
    end of 2014, and the limit will become annoying sometime before 2020.

    Solution seems to be to support something potentially bigger than INT
    for GUCs. So we can reclassify GUC_UNIT_MEMORY according to the
    platform we're on.

    Opinions?

    --
    Simon Riggs http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
    I suspect it should be fixed before it starts being a problem, for 2
    reasons:

      1. best to panic early while we have time
         (or more prosaically: doing it soon gives us more time to get it
         right without undue pressure)

      2. being unable to cope with 2TB and above might put off companies with
         seriously massive databases from moving to Postgres

    Probably a good idea to check what other values should be increased as well.


    Cheers,
    Gavin
  • Jeff Janes at Jun 18, 2013 at 4:06 am

    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.
    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".

    I tested several of the memory settings to see that they can be set and
    retrieved. I haven't tested actual execution as I don't have that kind of
    RAM.
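
    For illustration, a standalone sketch, not the actual guc.c code, of
    the kind of unit folding that makes "show" print "1TB" rather than
    "1024GB": report the value in the largest unit that divides it evenly.

        #include <stdio.h>

        #define KB_PER_MB 1024
        #define KB_PER_GB (1024 * 1024)
        #define KB_PER_TB (1024 * 1024 * 1024)  /* the macro the patch adds */

        static void show_kb(long long kb)
        {
            /* Fold into the largest unit that divides the value evenly. */
            if (kb % KB_PER_TB == 0)
                printf("%lldTB\n", kb / KB_PER_TB);
            else if (kb % KB_PER_GB == 0)
                printf("%lldGB\n", kb / KB_PER_GB);
            else if (kb % KB_PER_MB == 0)
                printf("%lldMB\n", kb / KB_PER_MB);
            else
                printf("%lldkB\n", kb);
        }

        int main(void)
        {
            show_kb(1073741824LL);  /* 1024 * 1024 * 1024 kB -> "1TB" */
            return 0;
        }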

    I don't see how it could have a performance impact; it passes make check
    etc., and I don't think it warrants a new regression test.

    I'll set it to ready for committer.

    Cheers,

    Jeff
  • Fujii Masao at Jun 18, 2013 at 4:11 pm

    On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes wrote:
    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".
    Looks good to me. But I found you forgot to change postgresql.conf.sample,
    so I changed it and attached the updated version of the patch.
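
    For context, a hypothetical sample-file line in the usual
    postgresql.conf.sample style with TB among the accepted units (the
    patch's exact wording may differ):

        #work_mem = 4MB    # min 64kB; accepts kB, MB, GB and (now) TB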

    Barring any objections to this patch, and if no one else picks it up, I
    will commit it.

    Regards,

    --
    Fujii Masao
  • Simon Riggs at Jun 18, 2013 at 5:40 pm

    On 18 June 2013 17:10, Fujii Masao wrote:
    On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes wrote:
    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".
    Looks good to me. But I found you forgot to change postgresql.conf.sample,
    so I changed it and attached the updated version of the patch.

    Barring any objections to this patch, and if no one else picks it up, I
    will commit it.
    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...

    Thank you both for adding to this patch. Since you've done that, it
    seems churlish of me to interrupt that commit.

    I will make a note to extend the support to higher values of TB later.

    --
      Simon Riggs http://www.2ndQuadrant.com/
      PostgreSQL Development, 24x7 Support, Training & Services
  • Josh Berkus at Jun 18, 2013 at 5:45 pm

    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...
    At the beginning of the CF, I do a sweep of patch files emailed to
    -hackers and not in the CF. I believe there were three such patches of yours;
    take a look at the CF list. Like I said, better to track them
    unnecessarily than to lose them.
    Thank you both for adding to this patch. Since you've done that, it
    seems churlish of me to interrupt that commit.
    Well, I think that someone needs to actually test doing a sort with,
    say, 100GB of RAM and make sure it doesn't crash. Anyone have a machine
    they can try that on?

    --
    Josh Berkus
    PostgreSQL Experts Inc.
    http://pgexperts.com
  • Stephen Frost at Jun 18, 2013 at 5:52 pm

    * Josh Berkus (josh@agliodbs.com) wrote:
    Well, I think that someone needs to actually test doing a sort with,
    say, 100GB of RAM and make sure it doesn't crash. Anyone have a machine
    they can try that on?
    It can be valuable to bump up work_mem well beyond the amount of memory
    actually available on the system to get the 'right' plan to be chosen
    (which often ends up needing much less actual memory to run).
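
    A session-level illustration of that trick (the value is an example
    only); as noted above, the plan chosen under the inflated setting often
    needs far less memory than the setting itself:

        -- Inflate work_mem for this session so the planner prefers
        -- in-memory hash joins; actual usage is typically much smaller.
        SET work_mem = '100GB';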

    I've used that trick on a box w/ 512GB of RAM and had near-100G PG
    backend processes which were doing hashjoins. Don't think I've ever had
    it try doing a sort w/ a really big work_mem.

      Thanks,

       Stephen
  • Simon Riggs at Jun 18, 2013 at 5:59 pm

    On 18 June 2013 18:45, Josh Berkus wrote:
    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...
    At the beginning of the CF, I do a sweep of patch files emailed to
    -hackers and not in the CF. I believe there were three such patches of yours;
    take a look at the CF list. Like I said, better to track them
    unnecessarily than to lose them.
    Thanks. Please delete the patch marked "Batch API for After Triggers".
    All others are submissions by me.

    --
      Simon Riggs http://www.2ndQuadrant.com/
      PostgreSQL Development, 24x7 Support, Training & Services
  • Josh Berkus at Jun 18, 2013 at 6:09 pm

    On 06/18/2013 10:59 AM, Simon Riggs wrote:

    Thanks. Please delete the patch marked "Batch API for After Triggers".
    All others are submissions by me.
    The CF app doesn't permit deletion of patches, so I marked it "returned
    with feedback".

    --
    Josh Berkus
    PostgreSQL Experts Inc.
    http://pgexperts.com
  • Fujii Masao at Jun 18, 2013 at 9:57 pm

    On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs wrote:
    On 18 June 2013 17:10, Fujii Masao wrote:
    On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes wrote:
    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".
    Looks good to me. But I found you forgot to change postgresql.conf.sample,
    so I changed it and attached the updated version of the patch.

    Barring any objections to this patch, and if no one else picks it up, I
    will commit it.
    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...
    Yes.
    Thank you both for adding to this patch. Since you've done that, it
    seems churlish of me to interrupt that commit.
    I was thinking of this as the infrastructure patch for your future
    proposal, i.e., supporting higher values of TB. But if it interferes with
    that proposal, of course I'm okay with dropping this patch. Thoughts?

    Regards,

    --
    Fujii Masao
  • Simon Riggs at Jun 19, 2013 at 7:47 am

    On 18 June 2013 22:57, Fujii Masao wrote:
    On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs wrote:
    On 18 June 2013 17:10, Fujii Masao wrote:
    On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes wrote:
    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".
    Looks good to me. But I found you forgot to change postgresql.conf.sample,
    so I changed it and attached the updated version of the patch.

    Barring any objections to this patch, and if no one else picks it up, I
    will commit it.
    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...
    Yes.
    Thank you both for adding to this patch. Since you've done that, it
    seems churlish of me to interrupt that commit.
    I was thinking of this as the infrastructure patch for your future
    proposal, i.e., supporting higher values of TB. But if it interferes with
    that proposal, of course I'm okay with dropping this patch. Thoughts?
    Yes, please commit.

    --
      Simon Riggs http://www.2ndQuadrant.com/
      PostgreSQL Development, 24x7 Support, Training & Services
  • Fujii Masao at Jun 19, 2013 at 11:18 pm

    On Wed, Jun 19, 2013 at 4:47 PM, Simon Riggs wrote:
    On 18 June 2013 22:57, Fujii Masao wrote:
    On Wed, Jun 19, 2013 at 2:40 AM, Simon Riggs wrote:
    On 18 June 2013 17:10, Fujii Masao wrote:
    On Tue, Jun 18, 2013 at 1:06 PM, Jeff Janes wrote:
    On Tuesday, May 21, 2013, Simon Riggs wrote:

    I worked up a small patch to support Terabyte setting for memory.
    Which is OK, but it only works for 1TB, not for 2TB or above.

    I've incorporated my review into a new version, attached.

    Added "TB" to the docs, added the macro KB_PER_TB, and made "show" print
    "1TB" rather than "1024GB".
    Looks good to me. But I found you forgot to change postgresql.conf.sample,
    so I changed it and attached the updated version of the patch.

    Barring any objections to this patch, and if no one else picks it up, I
    will commit it.
    In truth, I hadn't realised somebody had added this to the CF. It was
    meant to be an exploration and demonstration that further work was/is
    required rather than a production quality submission. AFAICS it is
    still limited to '1 TB' only...
    Yes.
    Thank you both for adding to this patch. Since you've done that, it
    seems churlish of me to interrupt that commit.
    I was thinking of this as the infrastructure patch for your future
    proposal, i.e., supporting higher values of TB. But if it interferes with
    that proposal, of course I'm okay with dropping this patch. Thoughts?
    Yes, please commit.
    Committed.

    Regards,

    --
    Fujii Masao

Discussion Overview
group: pgsql-hackers
categories: postgresql
posted: May 21, 2013 at 9:14 pm
active: Jun 19, 2013 at 11:18 pm
posts: 12
users: 6
website: postgresql.org...
irc: #postgresql
