Hi,

I have a Linux server (Debian) with PostgreSQL 8.3 and I have problems with a
massive update: about 400,000 updates/inserts.
If I execute about 100,000 everything seems fine, but when I execute 400,000 I
have the same problem with or without a transaction (I need to run it in a
transaction): memory and disk usage keep increasing.
With a run of 400,000 inserts/updates the server begins working well, but
after about 100 seconds of execution RAM usage grows, then swap, and finally
all RAM and swap are used and the execution can't finish.
I have done some tuning on the server; I have modified:
- shared_buffers = 1024MB
- work_mem = 512MB
- effective_cache_size = 2048MB
- random_page_cost = 2.0
- checkpoint_segments = 64
- wal_buffers = 8MB
- max_prepared_transactions = 100
- synchronous_commit = off

What is wrong in this configuration for running these inserts/updates?

Server has: 4GB RAM, 3GB swap, and SATA disks in RAID 5.


Thanks
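
One common way to keep memory bounded in a load like this is to commit in
batches of a few tens of thousands of rows instead of one 400,000-row
transaction. A minimal sketch, assuming a hypothetical table "target" keyed
by a sequential "id" (the real statements would come from the application):

    -- Hypothetical batched run: each transaction touches ~50,000 rows, so
    -- per-transaction state (such as the queue of pending after-trigger and
    -- FK-check events) stays small. Repeat the pattern up to the highest id.
    BEGIN;
    UPDATE target SET status = 'processed' WHERE id BETWEEN 1 AND 50000;
    COMMIT;

    BEGIN;
    UPDATE target SET status = 'processed' WHERE id BETWEEN 50001 AND 100000;
    COMMIT;

If the whole run really must be atomic, batching is not an option, and the
per-transaction memory consumers (deferred triggers and deferrable foreign
keys, raised later in this thread) are the first thing to check.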


  • Andreas Kretschmer at Oct 27, 2010 at 6:58 pm

    Trenta sis wrote:

    > Hi,
    >
    > I have a Linux server (Debian) with PostgreSQL 8.3 and I have problems
    > with a massive update: about 400,000 updates/inserts.

    Updates or inserts?

    > If I execute about 100,000 everything seems fine, but when I execute
    > 400,000 I have the same problem with or without a transaction (I need to
    > run it in a transaction): memory and disk usage keep increasing.
    > With a run of 400,000 inserts/updates the server begins working well,
    > but after about 100 seconds RAM usage grows, then swap, and finally all
    > RAM and swap are used and the execution can't finish.
    > I have done some tuning on the server; I have modified:
    > - shared_buffers = 1024MB
    > - work_mem = 512MB

    Way too high, but that's not the problem here... (I guess; it depends on
    the real query, see below about EXPLAIN ANALYZE.)

    > - effective_cache_size = 2048MB

    You have 4GB, but you have defined only 1GB for shared_buffers and only
    2GB for shared_buffers plus OS cache together. What about the other 2GB?

    > - random_page_cost = 2.0

    You have changed the default; why?

    > - checkpoint_segments = 64
    > - wal_buffers = 8MB
    > - max_prepared_transactions = 100
    > - synchronous_commit = off
    >
    > What is wrong in this configuration for running these inserts/updates?

    Hard to guess; can you provide the output generated by
    EXPLAIN ANALYZE <your query>?

    > Server has: 4GB RAM, 3GB swap, and SATA disks in RAID 5.

    RAID 5 isn't a good choice for a database server...


    Andreas
    --
    Really, I'm not out to destroy Microsoft. That will just be a completely
    unintentional side effect. (Linus Torvalds)
    "If I was god, I would recompile penguin with --enable-fly." (unknown)
    Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
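
    For scale: work_mem is a per-sort/per-hash-operation budget, not a
    per-connection one, so 512MB is risky on a 4GB machine. It can be tried
    per session before touching postgresql.conf; a sketch with placeholder
    names, and note that EXPLAIN ANALYZE really executes the statement,
    hence the ROLLBACK:

        -- Session-level experiment (postgresql.conf equivalent: work_mem = 32MB)
        SET work_mem = '32MB';

        BEGIN;
        EXPLAIN ANALYZE
        UPDATE target SET status = 'processed' WHERE id BETWEEN 1 AND 50000;
        ROLLBACK;  -- EXPLAIN ANALYZE actually ran the UPDATE; undo it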
  • Craig Ringer at Oct 27, 2010 at 11:24 pm

    On 10/28/2010 02:38 AM, Trenta sis wrote:

    > I have a Linux server (Debian) with PostgreSQL 8.3 and I have problems
    > with a massive update: about 400,000 updates/inserts. [...]
    > With a run of 400,000 inserts/updates the server begins working well,
    > but after about 100 seconds RAM usage grows, then swap, and finally all
    > RAM and swap are used and the execution can't finish.

    Do you have lots of triggers on the table? Or foreign key relationships
    that're DEFERRABLE?

    --
    Craig Ringer
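
    Craig's question can be checked directly in the system catalogs; a
    sketch, with "target" standing in for the real table name:

        -- Deferrable / initially deferred foreign keys on the table: in a
        -- large transaction their check events queue up in backend memory
        -- until COMMIT.
        SELECT conname, condeferrable, condeferred
        FROM   pg_constraint
        WHERE  conrelid = 'target'::regclass
        AND    contype  = 'f';

        -- All triggers on the table, including FK constraint triggers:
        SELECT tgname, tgdeferrable, tginitdeferred
        FROM   pg_trigger
        WHERE  tgrelid = 'target'::regclass;

    A backend whose RAM use grows steadily until COMMIT matches a long
    pending-event queue, which would also explain why the same load can
    finish when each statement commits on its own.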
  • Trenta sis at Oct 28, 2010 at 8:16 am
    There are about 100,000 inserts and 300,000 updates. Without a transaction
    it seems to work, but with a transaction it doesn't. With only about
    300,000 updates it seems it can finish correctly, but the last 20% is slow
    because it is using swap...

    Any tuning to do in this configuration, or is it correct?

    Thanks
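
    An approach that often sidesteps the memory problem entirely is to load
    the incoming data into a staging table and apply it set-based, so the
    whole job is a handful of statements instead of 400,000 individual ones.
    A sketch with placeholder table, column, and file names:

        -- Load the raw data client-side with psql's \copy (no superuser needed):
        CREATE TEMP TABLE staging (id integer PRIMARY KEY, val text);
        \copy staging FROM '/tmp/data.csv' WITH CSV

        -- One set-based UPDATE instead of ~300,000 single-row UPDATEs:
        UPDATE target t
        SET    val = s.val
        FROM   staging s
        WHERE  t.id = s.id;

        -- One INSERT for the ~100,000 rows that do not exist yet:
        INSERT INTO target (id, val)
        SELECT s.id, s.val
        FROM   staging s
        WHERE  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);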

  • Cédric Villemain at Oct 28, 2010 at 3:38 pm
    2010/10/28 Trenta sis <trenta.sis@gmail.com>:

    > There are about 100,000 inserts and 300,000 updates. Without a
    > transaction it seems to work, but with a transaction it doesn't. With
    > only about 300,000 updates it seems it can finish correctly, but the
    > last 20% is slow because it is using swap...
    >
    > Any tuning to do in this configuration, or is it correct?

    You should post your queries and the table definitions involved.



    --
    Cédric Villemain               2ndQuadrant
    http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support
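
    The details Cédric asks for can be captured with psql and pg_dump
    ("target" and "yourdb" are placeholders):

        -- In psql: table definition, indexes, foreign keys, and triggers
        \d target

    and "pg_dump --schema-only --table=target yourdb" dumps the complete DDL
    for the table.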
  • Trenta sis at Oct 28, 2010 at 9:48 pm
    Well, I have solved it by running with more RAM, and now it works correctly.

    Thanks



  • Emanuele Bracci Poste at Oct 28, 2010 at 10:28 pm
    Sorry, deadlines aside, but didn't it seem appropriate to ask whoever is
    managing the project before removing a resource?
    Thanks anyway.
    Emanuele

    On 28 Oct 2010, at 23:48, Trenta sis wrote:

    > Well, I have solved it by running with more RAM, and now it works
    > correctly.
