Are there any guidelines for sizing work_mem, shared_buffers, and other
configuration parameters with regard to very large records? I have a
table with a bytea column, and I am told that some of the values in
that column exceed 400 MB. I am having problems on several servers
reading, and more specifically dumping, these records (the whole table)
using pg_dump.

Thanks
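
Before tuning anything, a quick way to confirm how large the stored
values really are (the table name "mytable" and column name "payload"
below are placeholders, not taken from this thread):

    -- Show the ten largest bytea values; octet_length() reports the
    -- uncompressed size in bytes of each stored value.
    SELECT id, pg_size_pretty(octet_length(payload)::bigint) AS value_size
    FROM mytable
    ORDER BY octet_length(payload) DESC
    LIMIT 10;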

  • Robert Haas at Jul 29, 2011 at 12:25 am

    On Thu, Jul 7, 2011 at 10:33 AM, wrote:
    Are there any guidelines for sizing work_mem, shared_buffers, and
    other configuration parameters with regard to very large records? I
    have a table with a bytea column, and I am told that some of the
    values in that column exceed 400 MB. I am having problems on several
    servers reading, and more specifically dumping, these records (the
    whole table) using pg_dump.
    work_mem shouldn't make any difference to how well that performs;
    shared_buffers might, but there's no special advice for tuning it for
    large records vs. anything else. Large records just get broken up
    into small chunks under the hood (via TOAST). At any rate, your email
    is a little vague about exactly what the problem is. If you provide
    some more detail, you might get more help.

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
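
The "broken up into small chunks" behavior the reply refers to is TOAST:
values larger than roughly 2 kB are compressed and, if still too large,
stored out of line in a companion toast table in chunks of about 2000
bytes each. A minimal sketch for finding the TOAST relation behind a
table and its size, again assuming the placeholder table name "mytable":

    -- Look up the TOAST table that backs "mytable" and its on-disk size.
    -- reltoastrelid is 0 for tables with no TOAST-able columns.
    SELECT c.relname AS table_name,
           t.relname AS toast_table,
           pg_size_pretty(pg_total_relation_size(t.oid)) AS toast_size
    FROM pg_class c
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relname = 'mytable';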

Discussion Overview

group: pgsql-performance @ postgresql.org
categories: postgresql
posted: Jul 7, 2011 at 2:38 PM
active: Jul 29, 2011 at 12:25 AM
posts: 2
users: 2 (Jtkells: 1 post, Robert Haas: 1 post)
irc: #postgresql
