I am trying a simple access of a table and get an out of memory error. How do I avoid this issue? It
seems I have some configuration setting wrong.

Our system has 24GB of memory and is dedicated to the postgres database.

Background information

aquarec=> explain analyze verbose select * from ens_memb;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Seq Scan on ens_memb (cost=0.00..719893.12 rows=32216212 width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
Total runtime: 39588.386 ms

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 6144MB # min 128kB
# (change requires restart)
#temp_buffers = 8MB # min 800kB
max_prepared_transactions = 5 # zero disables the feature
# (change requires restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
work_mem = 48MB # min 64kB
maintenance_work_mem = 256MB # min 1MB
#max_stack_depth = 2MB # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching

#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # immediate fsync at commit
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
wal_buffers = 8MB # min 32kB
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds

#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000

# - Checkpoints -

checkpoint_segments = 32 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables

# - Archiving -

#archive_mode = off # allows archiving to be done
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables

#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -

#seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 3.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
effective_cache_size = 12288MB

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0

# - Other Planner Options -

default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
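
As a side note, the values the running server is actually using can be confirmed from psql via the standard pg_settings view; a quick check of the memory-related settings shown above:

aquarec=> SELECT name, setting, unit
aquarec->   FROM pg_settings
aquarec->  WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'effective_cache_size');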


  • Merlin Moncure at Aug 30, 2011 at 6:52 pm

    On Mon, Aug 29, 2011 at 2:15 PM, Don wrote:
    I am trying a simple access of a table and get an out of memory error. How
    do I avoid this issue? It seems I have some configuration setting wrong. [...]
    Your problem is probably on the client. libpq unfortunately holds
    complete results in memory, so you have to work around this.
    Resolutions:
    *) switch to 64-bit psql/libpq if you haven't already
    *) use a cursor (sketch below)
    *) browse records client side, ordering on an index/key (sketch below)

    merlin
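
As a rough size estimate from the plan above, 32,216,154 rows at 62 bytes per row is on the order of 2 GB of raw data before any per-row overhead, which can easily exhaust the address space of a client that buffers the entire result, especially a 32-bit one. A minimal sketch of the cursor approach from psql; the cursor name ens_cur and the batch size of 10000 are arbitrary choices:

aquarec=> BEGIN;
aquarec=> DECLARE ens_cur CURSOR FOR SELECT * FROM ens_memb;
aquarec=> FETCH FORWARD 10000 FROM ens_cur;
aquarec=> -- process the batch, then repeat the FETCH until it returns no rows
aquarec=> CLOSE ens_cur;
aquarec=> COMMIT;

And a sketch of browsing client side on a key, assuming id is unique and indexed:

aquarec=> SELECT * FROM ens_memb ORDER BY id LIMIT 10000;
aquarec=> -- with :last_id set via \set to the largest id seen in the previous batch:
aquarec=> SELECT * FROM ens_memb WHERE id > :last_id ORDER BY id LIMIT 10000;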
