Hi everybody

I need help with how to read SQL commands from a file instead of typing them at
the postgres (psql) command line. I mean, I have a file which includes, for
example, a CREATE TABLE command, and I want to load it into an already created
postgres database. Would you please help me with this?
Thanks,
Rosta


************************************
Rosta Farzan
Laboratory for Adaptive Hypermedia and Assistive Technologies
Department of Math and Computer Science CSU Hayward
rosta@acc.csuhayward.edu
(510) 885-4026
*************************************


  • Devinder K Rajput at Jan 16, 2003 at 1:07 am
    Hi Rosta,

    try:
    psql db_name < file_name
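    For example, assuming a hypothetical file tables.sql that holds an ordinary
    CREATE TABLE statement, the whole thing would look something like:

    $ cat tables.sql
    CREATE TABLE users (
        id    serial PRIMARY KEY,
        name  text NOT NULL
    );
    $ psql db_name < tables.sql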

    regards,

    Devinder Rajput
    Stores Division Corporate Offices
    Chicago, IL
    (773) 442-6474




    "Rosta Farzan" <rosta@sn432s03.sci.csuhayward.edu>
    Sent by: pgsql-novice-owner@postgresql.org
    01/15/2003 03:50 PM


    To: pgsql-novice@postgresql.org
    cc:
    Subject: [NOVICE] reading command from file


    Hi everybody

    I need help in how to read sql commands from file instead of typing in
    postgres (psql) command line. I mean I have a file which includes for
    example a create table command and I want to load it to postgres created
    database. Would you please help me with this.
    Thanks,
    Rosta


    ************************************
    Rosta Farzan
    Laboratory for Adaptive Hypermedia and Assistive Technologies
    Department of Math and Computer Science CSU Hayward
    rosta@acc.csuhayward.edu
    (510) 885-4026
    *************************************



    ---------------------------(end of broadcast)---------------------------
    TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
  • Rosta Farzan at Jan 16, 2003 at 7:10 pm
    Thanks everybody, I got it.

    Rosta

  • Steve Crawford at Jan 16, 2003 at 1:07 am
    check "man psql"

    You want the -f (or --file) option, i.e.:
    psql -f mystuff.psql
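    If the database is not your default one, name it explicitly (database name
    hypothetical):
    psql -d mydb -f mystuff.psql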

    Cheers,
    Steve

  • Lex Berezhny at Jan 16, 2003 at 1:11 am
    If the file containing your sql is named tables.sql you would do the
    following:

    $ psql -f tables.sql

    You can get all the other possible arguments to psql by typing:

    $ psql --help


    good luck!
  • Jeffrey Melloy at Jan 16, 2003 at 1:22 am
    \i [filename].
    The path is relative to the directory you launch psql from, oddly
    enough.
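    In a session that looks something like this (file name and path hypothetical;
    an absolute path avoids the relative-path surprise):

    mydb=# \i tables.sql
    mydb=# \i /home/me/sql/tables.sql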

    HTH
    Jeff
  • Thorsten Haude at Jan 16, 2003 at 7:48 am
    Hi,

    * Rosta Farzan [2003-01-15 22:50]:
    I need help in how to read sql commands from file instead of typing in
    postgres (psql) command line.
    Just write your commands in a file and pipe them in:
    psql <yourFile.pgsql


    Thorsten
    --
    He that would make his own liberty secure, must guard even
    his enemy from oppression; for if he violates this duty,
    he establishes a precedent which will reach to himself.
    - Thomas Paine
  • Thilo Hille at Jan 16, 2003 at 10:37 am
    Hello,
    I have a database running on postgres 7.2.3-1.
    Some tables get fed with about 200,000 records a day. A cron job does a
    "VACUUM ANALYSE" at night.
    This takes about an hour. During this time the performance gets really bad.
    After a short while the client (Apache, running on another machine) hits the
    max_connections limit and further incoming connections are refused. The
    system load goes up to 20.
    Is there any option to give VACUUM ANALYSE a lower priority, or should I
    upgrade to the latest version?
    Excerpt from postgresql.conf:

    shared_buffers = 70000
    max_fsm_relations = 100
    max_fsm_pages = 2000
    sort_mem = 128
    vacuum_mem = 8192
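    For reference, the nightly job is essentially a crontab entry of this shape
    (time and database name hypothetical):

    0 3 * * * psql -c "VACUUM ANALYSE;" mydb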

    Thank you

    Thilo Hille
  • Ron Johnson at Jan 16, 2003 at 12:01 pm

    Could you run the cron job more often? That way, you'd be spreading
    the pain and each VACUUM ANALYSE would be doing less work.
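    For instance (schedule and database name hypothetical), something like

    0 */4 * * * psql -c "VACUUM ANALYSE;" mydb

    would vacuum six times a day instead of once.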

    --
    Ron Johnson, Jr.     mailto:ron.l.johnson@cox.net
    Jefferson, LA USA    http://members.cox.net/ron.l.johnson
    "Basically, I got on the plane with a bomb. Basically, I
    tried to ignite it. Basically, yeah, I intended to damage
    the plane."
      - RICHARD REID, who tried to blow up American Airlines Flight 63
  • Thilo Hille at Jan 16, 2003 at 12:36 pm

    Could you run the cron job more often? That way, you'd be spreading
    the pain and each VACUUM ANALYSE would be doing less work.
    It wouldn't change that much, I suppose. I think the bottleneck is a table
    which contains about 2.5 million records.
    Even when running VACUUM ANALYSE twice without changing any data in between,
    the second run also takes a lot of time.
    Also, at night the usage of the database is noticeably lower.
    When starting a vacuum during the day I can instantly watch the number of
    postmaster clients and the system load increasing very fast. Are the tables
    write-locked during vacuum?

    Thanks
    Thilo
  • Tom Lane at Jan 16, 2003 at 3:39 pm

    "Thilo Hille" <thilo@resourcery.de> writes:
    shared_buffers =70000
    max_fsm_relations = 100
    max_fsm_pages = 2000
    sort_mem = 128
    vacuum_mem = 8192
    If your DB is large enough that it takes an hour to run VACUUM, then
    those FSM parameters are surely way too small. I'd try something
    like 1000/1000000 for starters.

    Also, boosting vacuum_mem might help speed up VACUUM, if you have a
    reasonable amount of RAM in the box. (Instead of 8192 = 8Mb, try
    50000 or so.)
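    In postgresql.conf that would be roughly (starting-point values, not tuned
    for your workload):

    max_fsm_relations = 1000
    max_fsm_pages = 1000000
    vacuum_mem = 50000      # units are kilobytes, so about 50 MB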

    BTW, what *is* the amount of RAM in the box? I'm eyeing the
    shared_buffers setting with suspicion. It may be too high.
    500Mb in shared buffers would very likely be more usefully spent
    elsewhere.

    It's very likely that the undersized FSM settings have caused the system
    to leak a lot of disk space, and that the only way to recover it will
    now be a VACUUM FULL. Which will be painful :-(. Can you show us the
    output of VACUUM VERBOSE for your larger tables?

    regards, tom lane
  • Thilo Hille at Jan 16, 2003 at 5:03 pm

    If your DB is large enough that it takes an hour to run VACUUM, then
    those FSM parameters are surely way too small. I'd try something
    like 1000/1000000 for starters.
    Also, boosting vacuum_mem might help speed up VACUUM, if you have a
    reasonable amount of RAM in the box. (Instead of 8192 = 8Mb, try
    50000 or so.)
    Done. Thanks.
    I think it would run faster if there weren't all those clients eating up CPU
    and memory.
    During vacuum I noticed 105 concurrent clients bothering the database.
    BTW, what *is* the amount of RAM in the box? I'm eyeing the
    shared_buffers setting with suspicion. It may be too high.
    500Mb in shared buffers would very likely be more usefully spent
    elsewhere.
    The amount of RAM is 1 GB. At the moment postgres runs nearly alone on the
    machine.
    I believed it would be a good idea to set the shared buffers as high as
    possible...
    I lowered them to 50000.
    It's very likely that the undersized FSM settings have caused the system
    to leak a lot of disk space, and that the only way to recover it will
    now be a VACUUM FULL. Which will be painful :-(. Can you show us the
    output of VACUUM VERBOSE for your larger tables?
    Here it comes.
    Note: the FSM values are now 1000/1000000 and vacuum_mem has been increased
    to 50000. It seems to be faster now.
    _____________ Vacuum output for the larger tables: _____________
    # VACUUM VERBOSE user_log;
    NOTICE: --Relation user_log--
    NOTICE: Pages 232907: Changed 0, Empty 0; Tup 2841267: Vac 0, Keep 0, UnUsed 18478387.
    Total CPU 17.89s/1.38u sec elapsed 386.31 sec.
    VACUUM
    # VACUUM VERBOSE fullstatistic;
    NOTICE: --Relation fullstatistic--
    NOTICE: Index fullstatistic_day_pleid_preid_i: Pages 9631; Tuples 91227: Deleted 87611.
    CPU 0.42s/0.64u sec elapsed 55.21 sec.
    NOTICE: Removed 87611 tuples in 2382 pages.
    CPU 0.03s/0.31u sec elapsed 8.02 sec.
    NOTICE: Pages 118815: Changed 895, Empty 0; Tup 90646: Vac 87611, Keep 167, UnUsed 8777533.
    Total CPU 9.15s/1.76u sec elapsed 321.46 sec.
    VACUUM
    _________________________________________________________________
    Are there other drawbacks besides the wasted disk space?
    Could you be more specific about the term 'painful'? *fear!*
    Will VACUUM FULL completely lock the tables?


    Thank you
    Thilo
  • Tom Lane at Jan 16, 2003 at 5:47 pm

    "Thilo Hille" <thilo@resourcery.de> writes:
    BTW, what *is* the amount of RAM in the box? I'm eyeing the
    shared_buffers setting with suspicion. It may be too high.
    500Mb in shared buffers would very likely be more usefully spent
    elsewhere.
    Amount of RAM is 1GB. At the moment postgres runs nearly alone on the
    machine.
    I believed it would be a good idea to set the shared buffers as high as
    possible....
    i lowered them to 50000.
    Nope. There is a faction that thinks shared buffers should be as high
    as possible, and there is another faction that favors keeping them
    relatively small (I belong to the latter camp). But both factions agree
    that shared buffers near 50% of physical RAM is the worst scenario.
    When you do that, what really happens is that most disk pages end up
    being buffered twice: once in Postgres shared buffers and once in kernel
    disk cache. That's just wasting RAM. You either give shared buffers
    the bulk of RAM (and squeeze out kernel caching) or trim them down and
    rely on the kernel to do most of the disk caching.

    I'd cut shared_buffers to 10000, or maybe even less. I think you get
    to the point of diminishing returns with more than a few thousand of 'em.
    IMHO it's better to give the kernel the flexibility of assigning memory
    to processes or kernel disk cache as needed. You'll cope better with
    load spikes if the kernel has memory to spare. I suspect part of the
    reason your performance is going into the ground is the kernel is being
    forced to resort to swapping (you could check this with iostat or vmstat).
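    Concretely, that would be something like (a sketch, value not tuned):

    shared_buffers = 10000      # roughly 80 MB at the default 8 kB block size

    and running "vmstat 5" while a vacuum is underway will show whether the box
    is swapping (non-zero si/so columns).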

    # VACUUM VERBOSE user_log;
    NOTICE: --Relation user_log--
    NOTICE: Pages 232907: Changed 0, Empty 0; Tup 2841267: Vac 0, Keep 0, UnUsed 18478387
    That doesn't seem too horrible: roughly 12 tuples per page. However, if
    they're short rows then maybe there is a lot of free space there.
    # VACUUM VERBOSE fullstatistic;
    NOTICE: --Relation fullstatistic--
    NOTICE: Pages 118815: Changed 895, Empty 0; Tup 90646: Vac 87611, Keep 167, UnUsed 8777533.
    Here you are hurting: less than one tuple per page. This table
    desperately needs a VACUUM FULL.
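    (As arithmetic: 90646 tuples / 118815 pages is about 0.76 tuples per page,
    versus 2841267 / 232907, about 12.2, for user_log.)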
    Could you be more specific about the term 'painful'? *fear!*
    VACUUM FULL will completely lock the tables?
    Yes, it will.

    Now, if you expect the amount of stored data to grow over time, maybe
    you could just sit tight and wait for the tables to fill up. But if you
    want to reduce the amount of disk space occupied today, you need a
    VACUUM FULL.
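    For the table above that would be something like

    VACUUM FULL VERBOSE fullstatistic;

    which holds an exclusive lock on the table for the duration.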

    regards, tom lane
  • Thilo Hille at Jan 16, 2003 at 6:49 pm

    I'd cut shared_buffers to 10000, or maybe even less. I think you get
    to the point of diminishing returns with more than a few thousand of 'em.
    IMHO it's better to give the kernel the flexibility of assigning memory
    to processes or kernel disk cache as needed. You'll cope better with
    load spikes if the kernel has memory to spare. I suspect part of the
    reason your performance is going into the ground is the kernel is being
    forced to resort to swapping (you could check this with iostat or vmstat).
    OK, I'll try this.
    Would it make sense to lower the shmall & shmmax kernel values to match the
    smaller buffer memory too?
    # VACUUM VERBOSE fullstatistic;
    NOTICE: --Relation fullstatistic--
    NOTICE: Pages 118815: Changed 895, Empty 0; Tup 90646: Vac 87611, Keep 167, UnUsed 8777533.
    Here you are hurting: less than one tuple per page. This table
    desperately needs a VACUUM FULL.
    How about a table dump and restore? Would that solve the problem without a
    full vacuum?
    But anyway, both tables are growing, so I'll just let them fill up over time.
    Disk space is not (yet) scarce on that machine.

    Thank you
    Thilo
  • Tom Lane at Jan 16, 2003 at 9:15 pm

    "Thilo Hille" <thilo@resourcery.de> writes:
    Would it make sense to lower the shmall & shmmax kernelvalues according to
    the buffers memory too?
    Don't think it matters.
    Here you are hurting: less than one tuple per page. This table
    desperately needs a VACUUM FULL.
    How about a table dump and restore? Would that solve the problem without a
    full vacuum?
    Yeah, but if your objective is to keep the table accessible
    continuously, that's not going to help ...
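    A dump-and-reload would be roughly the following (database name hypothetical;
    the table is unavailable between the DROP and the reload):

    pg_dump -t fullstatistic mydb > fullstatistic.sql
    psql -c "DROP TABLE fullstatistic;" mydb
    psql -f fullstatistic.sql mydb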

    regards, tom lane
