Changing up the subject line because this is no longer a work in
progress nor is it pg_ping anymore.
On Sun, Jan 20, 2013 at 10:36 PM, Craig Ringer wrote:
On 01/21/2013 11:26 AM, Robert Haas wrote:
On Sun, Jan 20, 2013 at 2:59 PM, Phil Sorber wrote:
Ok. I can add something to the notes section of the docs. I can also
add some code comments for this and for grabbing the default params.
Sounds good.
I added the code comments, but realized it was already in the docs. I
will provide a separate patch for the PQping docs.

I also added the handling of extra params as Robert suggested.
Oh, I see. Is it really important to have the host and port in the
output, or should we trim that down to just e.g. "accepting
connections"? It seems useful to have that if a human is looking at
the output, but maybe not if a machine is looking at the output. And
if somebody doesn't want it, getting rid of it with sed or awk is
nontrivial - imagine:

pg_isready -d "/tmp:5432 - accepting connections"
If you are scripting I'd assume you would use the return code value.
It might be reasonable to make adding the host and port the verbose
method and have just "accepting connections" as the default output,
but my concern there is a misdiagnosis because someone doesn't realize
what server they are connecting to. This way they can't miss it and
they don't have to add another command line option to get that output.
It's a fair concern. Does anyone else have an opinion on this?
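For the scripting case the exit status is the interesting part. As a minimal
illustration (a sketch only - the exact codes and wording below are not
necessarily what pg_isready prints), a PQping() result can be mapped onto
script-friendly exit codes like this:

/*
 * Sketch only: map libpq's PQping() result onto exit codes a script can
 * test, independent of whatever is printed for human readers.
 */
#include <stdio.h>
#include "libpq-fe.h"

int
main(int argc, char **argv)
{
    const char *conninfo = (argc > 1) ? argv[1] : "";

    switch (PQping(conninfo))
    {
        case PQPING_OK:
            printf("accepting connections\n");
            return 0;
        case PQPING_REJECT:
            printf("rejecting connections\n");
            return 1;
        case PQPING_NO_RESPONSE:
            printf("no response\n");
            return 2;
        case PQPING_NO_ATTEMPT:
        default:
            printf("no attempt made\n");
            return 3;
    }
}
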
I have a strong view that the host and port *should* be printed in output.

One of the most common issues on Stack Overflow questions from new users
is with "I can't connect" problems that turn out to be them connecting
to the wrong host, port, or socket path.

It is absolutely vital that the unix socket path being used be printed
if unix socket connections are tested, as Mac OS X's insane setup means
that PostgreSQL tool builds on that platform regularly use the system
libpq not the user-installed PostgreSQL's libpq, and it defaults to a
different socket path. If users use pg_ping to verify that their
PostgreSQL instance is running it'll use the user-installed libpq -
fine, that's what we want. But the user will then wonder why the heck
Ruby on Rails's `pg' gem reports it can't connect to the unix socket.
They can't compare the unix socket paths printed in the error message if
pg_ping doesn't show it.

I would like to see pg_ping produce a similar error to psql on
connection failure.
As stated above this is no longer called pg_ping. Probably should have
switched the subject line a while ago.

I've left it outputting the host and port as default.

Also, we went over a lot of iterations on how the errors should look.
I am hesitant to change that now. I think the current errors are good
because not being able to connect isn't necessarily an unexpected
outcome like it is for psql.
$ psql -p 9999
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.9999"?

$ psql -h localhost -p 9999
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 9999?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 9999?

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

  • Robert Haas at Jan 23, 2013 at 4:07 pm

    On Mon, Jan 21, 2013 at 12:23 PM, Phil Sorber wrote:
    Changing up the subject line because this is no longer a work in
    progress nor is it pg_ping anymore.
    OK, I committed this. However, I have one suggestion. Maybe it would
    be a good idea to add a -c or -t option that sets the connect_timeout
    parameter. Because:

    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Phil Sorber at Jan 23, 2013 at 5:19 pm

    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    On Mon, Jan 21, 2013 at 12:23 PM, Phil Sorber wrote:
    Changing up the subject line because this is no longer a work in
    progress nor is it pg_ping anymore.
    OK, I committed this. However, I have one suggestion. Maybe it would
    be a good idea to add a -c or -t option that sets the connect_timeout
    parameter. Because:

    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Oh, hrmm. Yes, I will address that with a follow up patch. I guess in
    my testing I was using a host that responded properly with port
    unreachable or TCP RST or something.

    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Tom Lane at Jan 23, 2013 at 5:28 pm

    Phil Sorber writes:
    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.

        regards, tom lane
  • Bruce Momjian at Jan 23, 2013 at 6:58 pm

    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:

      from pg_ctl.c:

      #define DEFAULT_WAIT 60

    --
       Bruce Momjian <bruce@momjian.us> http://momjian.us
       EnterpriseDB http://enterprisedb.com

       + It's impossible for everything to be true. +
  • Phil Sorber at Jan 23, 2013 at 7:50 pm

    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:

    from pg_ctl.c:

    #define DEFAULT_WAIT 60
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    --
    Bruce Momjian <bruce@momjian.us> http://momjian.us
    EnterpriseDB http://enterprisedb.com

    + It's impossible for everything to be true. +


  • Bruce Momjian at Jan 23, 2013 at 7:51 pm

    On Wed, Jan 23, 2013 at 02:50:01PM -0500, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:

    from pg_ctl.c:

    #define DEFAULT_WAIT 60
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    Yeah, being able to point to precedent is always helpful.

    --
       Bruce Momjian <bruce@momjian.us> http://momjian.us
       EnterpriseDB http://enterprisedb.com

       + It's impossible for everything to be true. +
  • Phil Sorber at Jan 23, 2013 at 8:15 pm

    On Wed, Jan 23, 2013 at 2:51 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 02:50:01PM -0500, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas wrote:
    [rhaas pgsql]$ pg_isready -h www.google.com
    <grows old, dies>
    Do you think we should have a default timeout, or only have one if
    specified at the command line?
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:

    from pg_ctl.c:

    #define DEFAULT_WAIT 60
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    Yeah, being able to point to precedent is always helpful.

    --
    Bruce Momjian <bruce@momjian.us> http://momjian.us
    EnterpriseDB http://enterprisedb.com

    + It's impossible for everything to be true. +
    Attached is the patch to add a connect_timeout.

    I also factored the setting of user and dbname out of the default
    gathering loop, since we only need the host and port defaults there,
    and it made more sense to handle user/dbname in the same area of the
    code as connect_timeout. This is something Robert mentioned upthread.
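
    A minimal sketch of the resulting shape - the keyword/value arrays that
    get handed to PQpingParams(), with connect_timeout passed straight
    through (the function and variable names here are illustrative, not
    taken from the patch):

    #include "libpq-fe.h"

    /*
     * Sketch: NULL or empty values are ignored by libpq, so environment
     * variables and built-in defaults still apply for anything the user
     * did not set on the command line.
     */
    static PGPing
    ping_server(const char *pghost, const char *pgport,
                const char *pguser, const char *pgdbname,
                const char *timeout)        /* e.g. "3", in seconds */
    {
        const char *keywords[] = {"host", "port", "user", "dbname",
                                  "connect_timeout", NULL};
        const char *values[] = {pghost, pgport, pguser, pgdbname,
                                timeout, NULL};

        return PQpingParams(keywords, values, 0);
    }
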

    This also coincidentally fixes a bug in the size of the keywords and
    values arrays. I must have added an option during review and not
    extended the arrays. Is there a best practice for making sure that
    doesn't happen in the future? I was thinking of defining MAX_PARAMS,
    setting the array size to MAX_PARAMS+1, and then checking in the getopt
    loop how many params we expect to process, erroring out if there are
    too many - but that only works at runtime.
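
    A sketch of that runtime guard, with illustrative names:

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_PARAMS 6                        /* options we expect to pass */

    static const char *keywords[MAX_PARAMS + 1];    /* +1 for NULL terminator */
    static const char *values[MAX_PARAMS + 1];
    static int  nparams = 0;

    /* called from the getopt loop for each connection option */
    static void
    add_param(const char *keyword, const char *value)
    {
        if (nparams >= MAX_PARAMS)
        {
            fprintf(stderr, "pg_isready: too many connection parameters\n");
            exit(1);
        }
        keywords[nparams] = keyword;
        values[nparams] = value;
        nparams++;
        keywords[nparams] = NULL;               /* keep the arrays terminated */
        values[nparams] = NULL;
    }
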

    One other thing I noticed while refactoring the defaults-gathering
    code: if someone passes in a connection string for dbname, we output the
    wrong info at the end. This is not addressed in this patch because I
    wanted to get some feedback before fixing it in a separate patch. I
    see the only real option being to use PQconninfoParse to get the
    params from the connection string. If someone passes in a connection
    string via dbname, should that take precedence over values passed in via
    individual command line options? Should the ordering of the command line
    options matter?

    For example if someone did: pg_isready -h foo -d "host=bar port=4321" -p 1234

    What should the connection parameters be?
  • Tom Lane at Jan 23, 2013 at 11:08 pm

    Phil Sorber writes:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    I'm not sure that's a relevant precedent at all. What that number is
    is the time that pg_ctl will wait around for the postmaster to start or
    stop before reporting a problem --- and in either case, a significant
    delay (multiple seconds) is not surprising, because of crash-recovery
    work, shutdown checkpointing, etc. For pg_isready, you'd expect to get
    a response more or less instantly, wouldn't you? Personally, I'd decide
    that pg_isready is broken if it didn't give me an answer in a couple of
    seconds, much less a minute.

    What I had in mind was a default timeout of maybe 3 or 4 seconds...

        regards, tom lane
  • Phil Sorber at Jan 23, 2013 at 11:47 pm

    On Wed, Jan 23, 2013 at 6:07 PM, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    I'm not sure that's a relevant precedent at all. What that number is
    is the time that pg_ctl will wait around for the postmaster to start or
    stop before reporting a problem --- and in either case, a significant
    delay (multiple seconds) is not surprising, because of crash-recovery
    work, shutdown checkpointing, etc. For pg_isready, you'd expect to get
    a response more or less instantly, wouldn't you? Personally, I'd decide
    that pg_isready is broken if it didn't give me an answer in a couple of
    seconds, much less a minute.

    What I had in mind was a default timeout of maybe 3 or 4 seconds...
    I was thinking that if it was in a script you wouldn't care if it was
    60 seconds. If it was at the command line you would ^C it much
    earlier. I think the default linux TCP connection timeout is around 20
    seconds. My feeling is everyone is going to have a differing opinion
    on this, which is why I was hoping that some good precedent existed.
    I'm fine with 3 or 4, whatever can be agreed upon.
    regards, tom lane
  • Fujii Masao at Jan 24, 2013 at 1:12 am

    On Thu, Jan 24, 2013 at 8:47 AM, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 6:07 PM, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    I'm not sure that's a relevant precedent at all. What that number is
    is the time that pg_ctl will wait around for the postmaster to start or
    stop before reporting a problem --- and in either case, a significant
    delay (multiple seconds) is not surprising, because of crash-recovery
    work, shutdown checkpointing, etc. For pg_isready, you'd expect to get
    a response more or less instantly, wouldn't you? Personally, I'd decide
    that pg_isready is broken if it didn't give me an answer in a couple of
    seconds, much less a minute.

    What I had in mind was a default timeout of maybe 3 or 4 seconds...
    I was thinking that if it was in a script you wouldn't care if it was
    60 seconds. If it was at the command line you would ^C it much
    earlier. I think the default linux TCP connection timeout is around 20
    seconds. My feeling is everyone is going to have a differing opinion
    on this, which is why I was hoping that some good precedent existed.
    I'm fine with 3 or 4, whatever can be agreed upon.
    +1 for 3 or 4 seconds.

    Aside from this issue, I have one minor comment. ISTM you need to
    add the following line to the end of the help message. This line has
    been included in the help message of all the other client commands.

         Report bugs to <pgsql-bugs@postgresql.org>.

    Regards,

    --
    Fujii Masao
  • Phil Sorber at Jan 24, 2013 at 4:45 pm

    On Wed, Jan 23, 2013 at 8:12 PM, Fujii Masao wrote:
    On Thu, Jan 24, 2013 at 8:47 AM, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 6:07 PM, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    I'm not sure that's a relevant precedent at all. What that number is
    is the time that pg_ctl will wait around for the postmaster to start or
    stop before reporting a problem --- and in either case, a significant
    delay (multiple seconds) is not surprising, because of crash-recovery
    work, shutdown checkpointing, etc. For pg_isready, you'd expect to get
    a response more or less instantly, wouldn't you? Personally, I'd decide
    that pg_isready is broken if it didn't give me an answer in a couple of
    seconds, much less a minute.

    What I had in mind was a default timeout of maybe 3 or 4 seconds...
    I was thinking that if it was in a script you wouldn't care if it was
    60 seconds. If it was at the command line you would ^C it much
    earlier. I think the default linux TCP connection timeout is around 20
    seconds. My feeling is everyone is going to have a differing opinion
    on this, which is why I was hoping that some good precedent existed.
    I'm fine with 3 or 4, whatever can be agreed upon.
    +1 with 3 or 4 secounds.

    Aside from this issue, I have one minor comment. ISTM you need to
    add the following line to the end of the help message. This line has
    been included in the help message of all the other client commands.

    Report bugs to <pgsql-bugs@postgresql.org>.
    Ok, I set the default timeout to 3 seconds, added the bugs email to
    the help, and also added docs that I forgot last time.

    Also, still hoping to get some feedback on my other issues.

    Thanks.
    Regards,

    --
    Fujii Masao
  • Fujii Masao at Jan 24, 2013 at 6:12 pm

    On Fri, Jan 25, 2013 at 1:45 AM, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 8:12 PM, Fujii Masao wrote:
    On Thu, Jan 24, 2013 at 8:47 AM, Phil Sorber wrote:
    On Wed, Jan 23, 2013 at 6:07 PM, Tom Lane wrote:
    Phil Sorber <phil@omniti.com> writes:
    On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian wrote:
    On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
    +1 for default timeout --- if this isn't like "ping" where you are
    expecting to run indefinitely, I can't see that it's a good idea for it
    to sit very long by default, in any circumstance.
    FYI, the pg_ctl -w (wait) default is 60 seconds:
    Great. That is what I came to on my own as well. Figured that might be
    a sticking point, but as there is precedent, I'm happy with it.
    I'm not sure that's a relevant precedent at all. What that number is
    is the time that pg_ctl will wait around for the postmaster to start or
    stop before reporting a problem --- and in either case, a significant
    delay (multiple seconds) is not surprising, because of crash-recovery
    work, shutdown checkpointing, etc. For pg_isready, you'd expect to get
    a response more or less instantly, wouldn't you? Personally, I'd decide
    that pg_isready is broken if it didn't give me an answer in a couple of
    seconds, much less a minute.

    What I had in mind was a default timeout of maybe 3 or 4 seconds...
    I was thinking that if it was in a script you wouldn't care if it was
    60 seconds. If it was at the command line you would ^C it much
    earlier. I think the default linux TCP connection timeout is around 20
    seconds. My feeling is everyone is going to have a differing opinion
    on this, which is why I was hoping that some good precedent existed.
    I'm fine with 3 or 4, whatever can be agreed upon.
    +1 with 3 or 4 secounds.

    Aside from this issue, I have one minor comment. ISTM you need to
    add the following line to the end of the help message. This line has
    been included in the help message of all the other client commands.

    Report bugs to <pgsql-bugs@postgresql.org>.
    Ok, I set the default timeout to 3 seconds, added the bugs email to
    the help, and also added docs that I forgot last time. Thanks!
    Also, still hoping to get some feedback on my other issues.
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

         Try \"%s --help\" for more information.

    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

         $ pg_isready -d "port=9999"
         /tmp:5432 - no response

    Regards,

    --
    Fujii Masao
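
    A sketch of the first two points, assuming the usual frontend
    boilerplate (the text domain string and the exit codes here are
    illustrative):

    #include "postgres_fe.h"
    #include "libpq-fe.h"

    int
    main(int argc, char **argv)
    {
        /* fallback only: PGAPPNAME or an explicit application_name still wins */
        const char *keywords[] = {"fallback_application_name", NULL};
        const char *values[] = {"pg_isready", NULL};

        (void) argc;

        /* NLS and PGSYSCONFDIR/PGLOCALEDIR setup, as the other clients do */
        set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_isready"));

        return PQpingParams(keywords, values, 0) == PQPING_OK ? 0 : 2;
    }
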
  • Phil Sorber at Jan 24, 2013 at 7:10 pm

    On Thu, Jan 24, 2013 at 1:12 PM, Fujii Masao wrote:
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

    Try \"%s --help\" for more information.
    I've updated the patch to address these three issues. Attached.
    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what I asked about in my previous email regarding precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging the individual and conninfo params together?

    For example if someone did: pg_isready -h foo -d "host=bar port=4321" -p 1234

    What should the connection parameters be?
    Regards,

    --
    Fujii Masao
  • Fujii Masao at Jan 25, 2013 at 4:20 pm

    On Fri, Jan 25, 2013 at 4:10 AM, Phil Sorber wrote:
    On Thu, Jan 24, 2013 at 1:12 PM, Fujii Masao wrote:
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

    Try \"%s --help\" for more information.
    I've updated the patch to address these three issues. Attached.
    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what i asked about in my previous email about precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging both individual and conninfo params together?
    If I read conninfo_array_parse() correctly, PQpingParams() prefer the
    option which is set to its keyword array later.

    Regards,

    --
    Fujii Masao
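
    A sketch of that ordering rule: with expand_dbname set, a conninfo
    string supplied via -d is expanded where the dbname entry sits, and any
    keyword repeated later in the arrays overrides it, so the example below
    probes host "bar" on port 1234 (values are illustrative):

    #include "libpq-fe.h"

    int
    main(void)
    {
        /* dbname carries a conninfo string; the later "port" entry wins */
        const char *keywords[] = {"dbname", "port", NULL};
        const char *values[] = {"host=bar port=4321", "1234", NULL};

        return PQpingParams(keywords, values, 1) == PQPING_OK ? 0 : 2;
    }
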
  • Pavel Stehule at Jan 26, 2013 at 9:03 am
    Hello

    We now have to solve a small Puppet issue, because our Puppet scripts
    try to start the server too early, while the old instance is still alive.

    Maybe some new parameter - is_done - could be useful.

    Regards

    Pavel

    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what i asked about in my previous email about precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging both individual and conninfo params together?
    If I read conninfo_array_parse() correctly, PQpingParams() prefer the
    option which is set to its keyword array later.

    Regards,

    --
    Fujii Masao


  • Phil Sorber at Jan 26, 2013 at 4:51 pm

    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done

    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Pavel Stehule at Jan 26, 2013 at 4:54 pm

    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - the server is only done at the moment when it can be
    started again, or when we can take a safe copy of the database data directory.

    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Phil Sorber at Jan 26, 2013 at 4:56 pm

    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess I am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Pavel Stehule at Jan 26, 2013 at 5:40 pm

    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use Puppet, and due to some simplification we cannot use reload when
    the configuration is changed. Our Puppet scripts are not intelligent
    enough to understand when a reload is enough and when a restart is
    necessary, so any change to the configuration requires restarting
    Postgres. I don't know why "service restart" is not used; I believe our
    Puppet guru knows - it just does the sequence STOP:START. Now the Puppet
    scripts are "smart" enough to wait until the server is ready, but there
    is no simple test for whether the server is really done, and I see error
    messages related to trying to start it too early. So verification that
    the server is really done could be an important feature.

    We can test the pid file now - and probably we will use that. But I see
    this as a similar use case (in the opposite direction).

    Regards

    Pavel
    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Phil Sorber at Jan 26, 2013 at 9:07 pm

    On Sat, Jan 26, 2013 at 12:39 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use puppets and due some simplification we cannot to use reload
    when configuration is changed. Our puppets has not enough intelligence
    to understand when is reload enough and when is restart necessary. So
    any change to configuration require restarting postgres. I don't know
    why "service restart" are not used. I believe so our puppet guru know
    it. It just do sequence STOP:START Now puppets are "smart" and able
    to wait for time, when server is ready. But there are missing simple
    test if server is really done and I see a error messages related to
    too early try to start. So some important feature can be verification
    so server is really done.

    We can do it with test on pid file now - and probably we will use it.
    But I see so this is similar use case (in opposite direction)
    I guess I am still not clear why you can't do:

    stop_pg_via_puppet
    pg_isready
    while [ $? -ne 2 ]
       do
         sleep 1
         pg_isready
       done
    do_post_stop_things
    start_pg_via_puppet
    Regards

    Pavel
    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Pavel Stehule at Jan 26, 2013 at 9:37 pm

    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 12:39 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use puppets and due some simplification we cannot to use reload
    when configuration is changed. Our puppets has not enough intelligence
    to understand when is reload enough and when is restart necessary. So
    any change to configuration require restarting postgres. I don't know
    why "service restart" are not used. I believe so our puppet guru know
    it. It just do sequence STOP:START Now puppets are "smart" and able
    to wait for time, when server is ready. But there are missing simple
    test if server is really done and I see a error messages related to
    too early try to start. So some important feature can be verification
    so server is really done.

    We can do it with test on pid file now - and probably we will use it.
    But I see so this is similar use case (in opposite direction)
    I guess I am still not clear why you can't do:

    stop_pg_via_puppet
    pg_isready
    while [ $? -ne 2 ]
    do
    sleep 1
    pg_isready
    done
    do_post_stop_things
    start_pg_via_puppet
    because ! pg_isready <> pg_isdone
    Regards

    Pavel
    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Phil Sorber at Jan 26, 2013 at 10:20 pm

    On Sat, Jan 26, 2013 at 4:37 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 12:39 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use puppets and due some simplification we cannot to use reload
    when configuration is changed. Our puppets has not enough intelligence
    to understand when is reload enough and when is restart necessary. So
    any change to configuration require restarting postgres. I don't know
    why "service restart" are not used. I believe so our puppet guru know
    it. It just do sequence STOP:START Now puppets are "smart" and able
    to wait for time, when server is ready. But there are missing simple
    test if server is really done and I see a error messages related to
    too early try to start. So some important feature can be verification
    so server is really done.

    We can do it with test on pid file now - and probably we will use it.
    But I see so this is similar use case (in opposite direction)
    I guess I am still not clear why you can't do:

    stop_pg_via_puppet
    pg_isready
    while [ $? -ne 2 ]
    do
    sleep 1
    pg_isready
    done
    do_post_stop_things
    start_pg_via_puppet
    because ! pg_isready <> pg_isdone
    So you are proposing a different utility? Sorry, I thought you were
    proposing a new option to pg_isready. What would pg_isdone be testing
    for specifically? Is this something that would block until it has
    confirmed a shutdown?
    Regards

    Pavel
    Regards

    Pavel


    Perhaps with a counter to break out of the loop after some number of attempts.
    Regards

    Pavel
  • Craig Ringer at Jan 26, 2013 at 11:56 pm

    On 01/27/2013 06:20 AM, Phil Sorber wrote:
    On Sat, Jan 26, 2013 at 4:37 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 12:39 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use puppets and due some simplification we cannot to use reload
    when configuration is changed. Our puppets has not enough intelligence
    to understand when is reload enough and when is restart necessary. So
    any change to configuration require restarting postgres. I don't know
    why "service restart" are not used. I believe so our puppet guru know
    it. It just do sequence STOP:START Now puppets are "smart" and able
    to wait for time, when server is ready. But there are missing simple
    test if server is really done and I see a error messages related to
    too early try to start. So some important feature can be verification
    so server is really done.

    We can do it with test on pid file now - and probably we will use it.
    But I see so this is similar use case (in opposite direction)
    I guess I am still not clear why you can't do:

    stop_pg_via_puppet
    pg_isready
    while [ $? -ne 2 ]
    do
    sleep 1
    pg_isready
    done
    do_post_stop_things
    start_pg_via_puppet
    because ! pg_isready <> pg_isdone
    So you are proposing a different utility? Sorry, I thought you were
    proposing a new option to pg_isready. What would pg_isdone be testing
    for specifically? Is this something that would block until it has
    confirmed a shutdown?
    That's what it sounds like - confirming that PostgreSQL is really fully
    shut down.

    I'm not sure how you could do that over a protocol connection, myself.
    I'd just read the postmaster pid from the pidfile on disk and then `kill
    -0` it in a delay loop until the `kill` command returns failure. This
    could be a useful convenience utility but I'm not convinced it should be
    added to pg_isready because it requires local and possibly privileged
    execution, unlike pg_isready's network based operation. Privileges could
    be avoided by using an aliveness test other than `kill -0`, but you
    absolutely have to be local to verify that the postmaster has fully
    terminated - and it wouldn't make sense for a non-local process to care
    about this anyway.

    --
      Craig Ringer http://www.2ndQuadrant.com/
      PostgreSQL Development, 24x7 Support, Training & Services
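
    A rough sketch of that pidfile/kill(pid, 0) loop, assuming the PID is on
    the first line of postmaster.pid and glossing over error handling and
    privilege issues:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* returns 1 once the postmaster is gone, 0 if still alive after the timeout */
    static int
    wait_for_postmaster_exit(const char *pidfile, int max_seconds)
    {
        FILE   *fp = fopen(pidfile, "r");
        long    pid;
        int     i;

        if (fp == NULL)
            return 1;               /* no pidfile: assume it is already stopped */
        if (fscanf(fp, "%ld", &pid) != 1)
            pid = 0;
        fclose(fp);
        if (pid <= 0)
            return 1;

        for (i = 0; i < max_seconds; i++)
        {
            if (kill((pid_t) pid, 0) != 0 && errno == ESRCH)
                return 1;           /* process no longer exists */
            sleep(1);
        }
        return 0;
    }
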
  • Phil Sorber at Jan 27, 2013 at 12:16 am

    On Jan 26, 2013 6:56 PM, "Craig Ringer" wrote:
    On 01/27/2013 06:20 AM, Phil Sorber wrote:
    On Sat, Jan 26, 2013 at 4:37 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 12:39 PM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 11:53 AM, Pavel Stehule wrote:
    2013/1/26 Phil Sorber <phil@omniti.com>:
    On Sat, Jan 26, 2013 at 4:02 AM, Pavel Stehule wrote:
    Hello

    We now haw to solve small puppet issue, because our puppets try
    to
    start server too early, when old instance live still.

    Maybe some new parameter - is_done can be useful.
    What about something like:
    pg_isready; while [ $? -ne 2 ]; do sleep 1; pg_isready; done
    it is not enough - server is done in a moment, where can be started
    again - or when we can do safe copy of database data directory.
    I guess i am not completely understanding the case you are trying to
    solve. Can you explain a bit further?
    We use puppets and due some simplification we cannot to use reload
    when configuration is changed. Our puppets has not enough
    intelligence
    to understand when is reload enough and when is restart necessary. So
    any change to configuration require restarting postgres. I don't know
    why "service restart" are not used. I believe so our puppet guru know
    it. It just do sequence STOP:START Now puppets are "smart" and able
    to wait for time, when server is ready. But there are missing simple
    test if server is really done and I see a error messages related to
    too early try to start. So some important feature can be verification
    so server is really done.

    We can do it with test on pid file now - and probably we will use it.
    But I see so this is similar use case (in opposite direction)
    I guess I am still not clear why you can't do:

    stop_pg_via_puppet
    pg_isready
    while [ $? -ne 2 ]
    do
    sleep 1
    pg_isready
    done
    do_post_stop_things
    start_pg_via_puppet
    because ! pg_isready <> pg_isdone
    So you are proposing a different utility? Sorry, I thought you were
    proposing a new option to pg_isready. What would pg_isdone be testing
    for specifically? Is this something that would block until it has
    confirmed a shutdown?
    That's what it sounds like - confirming that PostgreSQL is really fully
    shut down.

    I'm not sure how you could do that over a protocol connection, myself.
    I'd just read the postmaster pid from the pidfile on disk and then `kill
    -0` it in a delay loop until the `kill` command returns failure. This
    could be a useful convenience utility but I'm not convinced it should be
    added to pg_isready because it requires local and possibly privileged
    execution, unlike pg_isready's network based operation. Privileges could
    be avoided by using an aliveness test other than `kill -0`, but you
    absolutely have to be local to verify that the postmaster has fully
    terminated - and it wouldn't make sense for a non-local process to care
    about this anyway.
    Maybe something to add to pg_ctl?
    --
    Craig Ringer http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Craig Ringer at Jan 27, 2013 at 12:30 am

    On 01/27/2013 08:16 AM, Phil Sorber wrote:
    Craig Ringer wrote:
    That's what it sounds like - confirming that PostgreSQL is really fully
    shut down.

    I'm not sure how you could do that over a protocol connection, myself.
    I'd just read the postmaster pid from the pidfile on disk and then `kill
    -0` it in a delay loop until the `kill` command returns failure. This
    could be a useful convenience utility but I'm not convinced it should be
    added to pg_isready because it requires local and possibly privileged
    execution, unlike pg_isready's network based operation. Privileges could
    be avoided by using an aliveness test other than `kill -0`, but you
    absolutely have to be local to verify that the postmaster has fully
    terminated - and it wouldn't make sense for a non-local process to care
    about this anyway.
    Maybe something to add to pg_ctl?
    That'd make a lot more sense than to pg_isready, yeah.

    --
      Craig Ringer http://www.2ndQuadrant.com/
      PostgreSQL Development, 24x7 Support, Training & Services
  • Tom Lane at Jan 27, 2013 at 5:46 pm

    Craig Ringer writes:
    That's what it sounds like - confirming that PostgreSQL is really fully
    shut down.
    I'm not sure how you could do that over a protocol connection, myself.
    I'd just read the postmaster pid from the pidfile on disk and then `kill
    -0` it in a delay loop until the `kill` command returns failure. This
    could be a useful convenience utility but I'm not convinced it should be
    added to pg_isready because it requires local and possibly privileged
    execution, unlike pg_isready's network based operation. Privileges could
    be avoided by using an aliveness test other than `kill -0`, but you
    absolutely have to be local to verify that the postmaster has fully
    terminated - and it wouldn't make sense for a non-local process to care
    about this anyway.
    This problem is actually quite a bit more difficult than it looks.
    In particular, the mere fact that the postmaster process is gone does
    not prove that the cluster is idle: it's possible that the postmaster
    crashed leaving orphan backends behind, and the orphans are still busily
    modifying on-disk state. A real postmaster knows how to check for that
    (by looking at the nattch count of the shmem segment cited in the old
    lockfile) but I can't see any shell script getting it right.

    So ATM I wouldn't trust any method short of "try to start a new
    postmaster and see if it works", which of course is not terribly helpful
    if your objective is to get to a stopped state.

    We could consider transposing the shmem logic into a new pg_ctl command.
    It might be better though to have a new switch in the postgres
    executable that just runs postmaster startup as far as detecting
    lockfile conflicts, and reports what it found (without ever launching
    any child processes that could confuse matters). Then "pg_ctl isdone"
    could be a frontend for that, instead of duplicating logic.

        regards, tom lane
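
    For reference, a very rough sketch of the nattch test, assuming the SysV
    shared memory ID has already been read out of the old lockfile; the
    postmaster's real check (PGSharedMemoryIsInUse) is considerably more
    careful than this:

    #include <sys/ipc.h>
    #include <sys/shm.h>

    /*
     * Returns 1 if some process is still attached to the segment, 0 if not,
     * -1 if the segment no longer exists or cannot be inspected.
     */
    static int
    segment_still_attached(int shmid)
    {
        struct shmid_ds buf;

        if (shmctl(shmid, IPC_STAT, &buf) < 0)
            return -1;

        return buf.shm_nattch > 0 ? 1 : 0;
    }
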
  • Pavel Stehule at Jan 27, 2013 at 5:54 pm

    2013/1/27 Tom Lane <tgl@sss.pgh.pa.us>:
    Craig Ringer <craig@2ndquadrant.com> writes:
    That's what it sounds like - confirming that PostgreSQL is really fully
    shut down.
    I'm not sure how you could do that over a protocol connection, myself.
    I'd just read the postmaster pid from the pidfile on disk and then `kill
    -0` it in a delay loop until the `kill` command returns failure. This
    could be a useful convenience utility but I'm not convinced it should be
    added to pg_isready because it requires local and possibly privileged
    execution, unlike pg_isready's network based operation. Privileges could
    be avoided by using an aliveness test other than `kill -0`, but you
    absolutely have to be local to verify that the postmaster has fully
    terminated - and it wouldn't make sense for a non-local process to care
    about this anyway.
    This problem is actually quite a bit more difficult than it looks.
    In particular, the mere fact that the postmaster process is gone does
    not prove that the cluster is idle: it's possible that the postmaster
    crashed leaving orphan backends behind, and the orphans are still busily
    modifying on-disk state. A real postmaster knows how to check for that
    (by looking at the nattch count of the shmem segment cited in the old
    lockfile) but I can't see any shell script getting it right.

    So ATM I wouldn't trust any method short of "try to start a new
    postmaster and see if it works", which of course is not terribly helpful
    if your objective is to get to a stopped state.

    We could consider transposing the shmem logic into a new pg_ctl command.
    It might be better though to have a new switch in the postgres
    executable that just runs postmaster startup as far as detecting
    lockfile conflicts, and reports what it found (without ever launching
    any child processes that could confuse matters). Then "pg_ctl isdone"
    could be a frontend for that, instead of duplicating logic.
    +1

    Pavel
    regards, tom lane
  • Phil Sorber at Jan 27, 2013 at 7:38 pm

    On Fri, Jan 25, 2013 at 11:20 AM, Fujii Masao wrote:
    On Fri, Jan 25, 2013 at 4:10 AM, Phil Sorber wrote:
    On Thu, Jan 24, 2013 at 1:12 PM, Fujii Masao wrote:
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

    Try \"%s --help\" for more information.
    I've updated the patch to address these three issues. Attached.
    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what i asked about in my previous email about precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging both individual and conninfo params together?
    If I read conninfo_array_parse() correctly, PQpingParams() prefer the
    option which is set to its keyword array later.
    It would be really nice to expose conninfo_array_parse() or some
    wrapped version directly to a libpq consumer. Otherwise, I need to
    recreate this behavior in pg_isready.c.

    Thoughts on adding:
       PQconninfoOption *PQparamsParse(const char **keywords, const char
    **values, char **errmsg, bool use_defaults, int expand_dbname)
    or similar?

    Or perhaps there is a better way to accomplish this that I am not aware of?
    Regards,

    --
    Fujii Masao
  • Phil Sorber at Jan 27, 2013 at 7:48 pm

    On Sun, Jan 27, 2013 at 2:38 PM, Phil Sorber wrote:
    On Fri, Jan 25, 2013 at 11:20 AM, Fujii Masao wrote:
    On Fri, Jan 25, 2013 at 4:10 AM, Phil Sorber wrote:
    On Thu, Jan 24, 2013 at 1:12 PM, Fujii Masao wrote:
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

    Try \"%s --help\" for more information.
    I've updated the patch to address these three issues. Attached.
    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what i asked about in my previous email about precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging both individual and conninfo params together?
    If I read conninfo_array_parse() correctly, PQpingParams() prefer the
    option which is set to its keyword array later.
    It would be really nice to expose conninfo_array_parse() or some
    wrapped version directly to a libpq consumer. Otherwise, I need to
    recreate this behavior in pg_isready.c.

    Thoughts on adding:
    PQconninfoOption *PQparamsParse(const char **keywords, const char
    **values, char **errmsg, bool use_defaults, int expand_dbname)
    or similar?

    Or perhaps there is a better way to accomplish this that I am not aware of?
    It would also be nice to be able to pass use_defaults to PQconninfoParse().
    Regards,

    --
    Fujii Masao
  • Fujii Masao at Jan 28, 2013 at 4:20 pm

    On Mon, Jan 28, 2013 at 4:47 AM, Phil Sorber wrote:
    On Sun, Jan 27, 2013 at 2:38 PM, Phil Sorber wrote:
    On Fri, Jan 25, 2013 at 11:20 AM, Fujii Masao wrote:
    On Fri, Jan 25, 2013 at 4:10 AM, Phil Sorber wrote:
    On Thu, Jan 24, 2013 at 1:12 PM, Fujii Masao wrote:
    set_pglocale_pgservice() should be called?

    I think that the command name (i.e., pg_isready) should be given to
    PQpingParams() as fallback_application_name. Otherwise, the server
    by default uses "unknown" as the application name of pg_isready.
    It's undesirable.

    Why isn't the following message output only when invalid option is
    specified?

    Try \"%s --help\" for more information.
    I've updated the patch to address these three issues. Attached.
    When the conninfo string including the hostname or port number is
    specified in -d option, pg_isready displays the wrong information
    as follows.

    $ pg_isready -d "port=9999"
    /tmp:5432 - no response
    This is what i asked about in my previous email about precedence of
    the parameters. I can parse that with PQconninfoParse, but what are
    the rules for merging both individual and conninfo params together?
    If I read conninfo_array_parse() correctly, PQpingParams() prefer the
    option which is set to its keyword array later.
    It would be really nice to expose conninfo_array_parse() or some
    wrapped version directly to a libpq consumer. Otherwise, I need to
    recreate this behavior in pg_isready.c.

    Thoughts on adding:
    PQconninfoOption *PQparamsParse(const char **keywords, const char
    **values, char **errmsg, bool use_defaults, int expand_dbname)
    or similar?
    Maybe. But I'm not inclined to add a new libpq interface at this stage,
    because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement it. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just removing that output message? At least I don't think that the
    information about host and port needs to be output, which would make
    the code much simpler.

    Regards,

    --
    Fujii Masao
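
    A sketch of that direction, assuming the goal is only to recover the
    effective host and port for the status line - libpq's defaults first,
    then anything parsed out of the -d conninfo string layered on top (the
    function names and buffer handling are illustrative):

    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"

    static void
    take_host_port(PQconninfoOption *opts, char *host, char *port, size_t len)
    {
        PQconninfoOption *opt;

        for (opt = opts; opt && opt->keyword; opt++)
        {
            if (opt->val == NULL || opt->val[0] == '\0')
                continue;
            if (strcmp(opt->keyword, "host") == 0)
                snprintf(host, len, "%s", opt->val);
            else if (strcmp(opt->keyword, "port") == 0)
                snprintf(port, len, "%s", opt->val);
        }
    }

    static void
    effective_host_port(const char *conninfo, char *host, char *port, size_t len)
    {
        char   *errmsg = NULL;
        PQconninfoOption *opts;

        opts = PQconndefaults();        /* built-in and environment defaults */
        take_host_port(opts, host, port, len);
        PQconninfoFree(opts);

        opts = PQconninfoParse(conninfo, &errmsg);  /* the -d string, if any */
        take_host_port(opts, host, port, len);
        PQconninfoFree(opts);
        if (errmsg)
            PQfreemem(errmsg);
    }
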
  • Phil Sorber at Jan 28, 2013 at 4:25 pm

    On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao wrote:
    Maybe. But I'm not inclined to add new libpq interface at this stage.
    Because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement that. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just remove that output message? At least I don't think that the
    information about host and port needs to be output. Which would make
    the code very simpler.
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    Regards,

    --
    Fujii Masao
  • Alvaro Herrera at Jan 28, 2013 at 6:12 pm

    Phil Sorber escribió:
    On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao wrote:
    Maybe. But I'm not inclined to add new libpq interface at this stage.
    Because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement that. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just remove that output message? At least I don't think that the
    information about host and port needs to be output. Which would make
    the code very simpler.
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    I suggest duplicating the code for 9.3, and submitting a patch to refactor
    it into a new libpq function for CF2013-next. If that patch is simple
    enough, we can consider putting it into 9.3.

    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Phil Sorber at Jan 28, 2013 at 6:14 pm
    On Mon, Jan 28, 2013 at 1:12 PM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao wrote:
    Maybe. But I'm not inclined to add new libpq interface at this stage.
    Because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement that. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just remove that output message? At least I don't think that the
    information about host and port needs to be output. Which would make
    the code very simpler.
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    I suggest duplicate the code for 9.3, and submit a patch to refactor
    into a new libpq function for CF2013-next. If the patch is simple
    enough, we can consider putting it into 9.3.
    Ok, sounds good to me.
    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Fujii Masao at Jan 29, 2013 at 4:43 pm

    On Tue, Jan 29, 2013 at 3:12 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao wrote:
    Maybe. But I'm not inclined to add new libpq interface at this stage.
    Because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement that. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just remove that output message? At least I don't think that the
    information about host and port needs to be output. Which would make
    the code very simpler.
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    I suggest duplicate the code for 9.3, and submit a patch to refactor
    into a new libpq function for CF2013-next. If the patch is simple
    enough, we can consider putting it into 9.3.
    Agreed.

    Regards,

    --
    Fujii Masao
  • Phil Sorber at Feb 3, 2013 at 2:55 am

    On Tue, Jan 29, 2013 at 11:43 AM, Fujii Masao wrote:
    On Tue, Jan 29, 2013 at 3:12 AM, Alvaro Herrera
    wrote:
    Phil Sorber escribió:
    On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao wrote:
    Maybe. But I'm not inclined to add new libpq interface at this stage.
    Because we are in the last CommitFest and I'm not sure whether
    we have enough time to implement that. Instead, how about using
    both PQconninfoParse() and PQconndefaults() like uri-regress.c does?
    Or just remove that output message? At least I don't think that the
    information about host and port needs to be output. Which would make
    the code very simpler.
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    I suggest duplicate the code for 9.3, and submit a patch to refactor
    into a new libpq function for CF2013-next. If the patch is simple
    enough, we can consider putting it into 9.3.
    Agreed.

    Regards,

    --
    Fujii Masao
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on its own.
  • Bruce Momjian at Feb 4, 2013 at 11:50 am

    On Sat, Feb 2, 2013 at 09:55:29PM -0500, Phil Sorber wrote:
    I think that output is important as do others up thread. I think it'd
    be simpler to just disable dbname's ability to double as conninfo. If
    we are looking for easy, that is.

    I don't mind duplicating the behavior from conninfo_array_parse() or
    uri-regress.c either. I can just put a comment that at some point in
    the future we should add a libpq interface for it.
    I suggest duplicate the code for 9.3, and submit a patch to refactor
    into a new libpq function for CF2013-next. If the patch is simple
    enough, we can consider putting it into 9.3.
    Agreed.

    Regards,

    --
    Fujii Masao
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    If we could run pg_isready on the patch, it would tell us if the patch
    is ready. Consider this a feature request. ;-)

    --
       Bruce Momjian <bruce@momjian.us> http://momjian.us
       EnterpriseDB http://enterprisedb.com

       + It's impossible for everything to be true. +
  • Robert Haas at Feb 5, 2013 at 11:41 am

    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?

    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Phil Sorber at Feb 5, 2013 at 2:02 pm

    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the case where a connection string is passed as
    dbname, so that we can get the right output for host:port. It works, but I
    don't really like it all that much, honestly. I also submitted a patch to
    add an interface to libpq to handle this, but Alvaro posed some questions I
    don't have good answers for. So I have another patch brewing that I like
    better, but I need to put the finishing touches on it. I plan on submitting
    that later this morning.
    --
    Robert Haas
    EnterpriseDB: http://www.enterprisedb.com
    The Enterprise PostgreSQL Company
  • Alvaro Herrera at Feb 5, 2013 at 2:06 pm

    Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?

    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Phil Sorber at Feb 5, 2013 at 2:08 pm
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Phil Sorber at Feb 5, 2013 at 5:44 pm

    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here it is. I like this approach more than the previous one, but I'd
    like some feedback.

    There still seems to be a bit of a disconnect in libpq, in my opinion:
    it takes options as a string (URI or conninfo) or as a set of arrays,
    but returns info about connection parameters in PQconninfoOption, and
    nothing takes that as an input. Seems odd to me.
    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
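
    The disconnect Phil describes can be bridged by hand today. Purely as an
    illustration (not code from either patch), a hypothetical helper that
    flattens a PQconninfoOption array back into the parallel keyword/value
    arrays that PQpingParams() expects might look like this:

    #include <stdlib.h>
    #include "libpq-fe.h"

    /*
     * Hypothetical helper, not part of libpq: ping the server described by a
     * PQconninfoOption array (as returned by PQconninfoParse() or
     * PQconndefaults()) by converting it into keyword/value arrays.
     */
    static PGPing
    conninfo_to_ping(PQconninfoOption *opts)
    {
        PQconninfoOption *o;
        const char **keywords;
        const char **values;
        int         n = 0;
        int         i = 0;
        PGPing      result;

        if (opts == NULL)
            return PQPING_NO_ATTEMPT;

        for (o = opts; o->keyword != NULL; o++)
            n++;

        keywords = malloc((n + 1) * sizeof(*keywords));
        values = malloc((n + 1) * sizeof(*values));
        if (keywords == NULL || values == NULL)
        {
            free(keywords);
            free(values);
            return PQPING_NO_ATTEMPT;
        }

        /* Copy only the options that actually carry a value. */
        for (o = opts; o->keyword != NULL; o++)
        {
            if (o->val == NULL || o->val[0] == '\0')
                continue;
            keywords[i] = o->keyword;
            values[i] = o->val;
            i++;
        }
        keywords[i] = NULL;
        values[i] = NULL;

        result = PQpingParams(keywords, values, 0);

        free(keywords);
        free(values);
        return result;
    }

    A libpq entry point with roughly this shape is what the thread seems to be
    circling around; until one exists, any client that wants both the parsed
    view of the options and the connection attempt has to do this conversion
    itself.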
  • Phil Sorber at Feb 6, 2013 at 3:05 pm
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    Minor adjustment.
    There still seems to be a bit of a disconnect in libpq in my opinion.
    Taking options as a string (URI or conninfo) or a set of arrays, but
    returning info about connection parameters in PQconninfoOption? And
    nothing that takes that as an input. Seems odd to me.
    --
    Álvaro Herrera http://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Training & Services
  • Fujii Masao at Feb 6, 2013 at 4:11 pm

    On Thu, Feb 7, 2013 at 12:05 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    The patch looks complicated to me. I was thinking that we could address
    the problem just by using PQconninfoParse() and PQconndefaults() like
    uri-regress.c does. The patch should be very simple. Why do we need such
    complicated code?

    Regards,

    --
    Fujii Masao
  • Phil Sorber at Feb 6, 2013 at 4:15 pm

    On Wed, Feb 6, 2013 at 11:11 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 12:05 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    The patch looks complicated to me. I was thinking that we can address
    the problem
    just by using PQconninfoParse() and PQconndefaults() like uri-regress.c does.
    The patch should be very simple. Why do we need so complicated code?
    Did you like the previous version better?

    http://www.postgresql.org/message-id/CADAkt-hnB3OhCpkR+PCg1c_Bjrsb7J__BPV+-jrjS5opjR2oww@mail.gmail.com
    Regards,

    --
    Fujii Masao
  • Fujii Masao at Feb 6, 2013 at 4:36 pm

    On Thu, Feb 7, 2013 at 1:15 AM, Phil Sorber wrote:
    On Wed, Feb 6, 2013 at 11:11 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 12:05 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    The patch looks complicated to me. I was thinking that we can address
    the problem
    just by using PQconninfoParse() and PQconndefaults() like uri-regress.c does.
    The patch should be very simple. Why do we need so complicated code?
    Did you like the previous version better?

    http://www.postgresql.org/message-id/CADAkt-hnB3OhCpkR+PCg1c_Bjrsb7J__BPV+-jrjS5opjR2oww@mail.gmail.com
    Yes, because that version is simpler. But which version is better depends
    on the reason why you implemented the new version. If you have some idea
    about the merits and demerits of each version, could you elaborate on them?

    + set_connect_options(connect_options, &pgdbname, &pghost,
    &pgport, &connect_timeout, &pguser);

    This code prevents us from giving options other than the above, for example
    application_name, in the conninfo. I think that pg_isready should accept all
    the libpq options.

    When more than one -d option is specified, psql always prefers the last
    one and ignores the others. OTOH, pg_isready with this patch seems to
    merge them. I'm not sure if there is a specific rule about the priority
    order of the -d option, but it seems better to follow the existing way,
    i.e., always prefer the last -d option.

    Regards,

    --
    Fujii Masao
  • Phil Sorber at Feb 6, 2013 at 5:14 pm

    On Wed, Feb 6, 2013 at 11:36 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 1:15 AM, Phil Sorber wrote:
    On Wed, Feb 6, 2013 at 11:11 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 12:05 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    The patch looks complicated to me. I was thinking that we can address
    the problem
    just by using PQconninfoParse() and PQconndefaults() like uri-regress.c does.
    The patch should be very simple. Why do we need so complicated code?
    Did you like the previous version better?

    http://www.postgresql.org/message-id/CADAkt-hnB3OhCpkR+PCg1c_Bjrsb7J__BPV+-jrjS5opjR2oww@mail.gmail.com
    Yes because that version is simpler. But which version is better depends on
    the reason why you implemented new version. If you have some idea about
    the merit and demerit of each version, could you elaborate them?
    I didn't like the way I had to hard-code the options in the first one, as
    you pointed out below. I was also looking through the code for something
    else and saw that a lot of the apps were starting with defaults and then
    building from there, rather than trying to add the defaults at the end. I
    think they were still doing it wrong, though, because they were using
    getenv() on their own rather than asking libpq for the defaults. So the
    new version gets the defaults at the beginning and also makes it easy to
    add new params without changing function definitions.
    + set_connect_options(connect_options, &pgdbname, &pghost,
    &pgport, &connect_timeout, &pguser);

    This code prevents us from giving options other than the above, for example
    application_name, in the conninfo. I think that pg_isready should accept all
    the libpq options.
    I'm with you there. The new version fixes that as well.
    When more than one -d options are specified, psql always prefer the last one
    and ignore the others. OTOH, pg_isready with this patch seems to merge them.
    I'm not sure if there is specific rule about the priority order of -d
    option. But
    it seems better to follow the existing way, i.e., always prefer the
    last -d option.
    The problem I am having here is resolving the differences between
    different -d options and other command line options. For example:

    -h foo -p 4321 -d "host=bar port=1234" -d "host=baz"

    I would expect that to be 'baz:1234' but you are saying it should be 'baz:4321'?

    I look at -d as just a way to pass in multiple options (when you aren't
    strictly passing in a dbname), so I should be able to expand the above
    example to:

    -h foo -p 4321 -h bar -p 1234 -h baz

    If we hold off on parsing the value of -d until the end so we are sure
    we have the last one, then we might lose other parameters that were
    after the -d option. For example:

    -h foo -p 4321 -d "host=bar port=1234" -d "host=baz user=you" -U me

    Should this be 'me@baz:1234' or 'you@baz:4321' or 'me@baz:4321'?

    So we would have to track the last instance of a parameter as well as
    the order those final versions came in. Sounds to me like there is no
    clear answer there, but maybe there is a project consensus that has
    already been reached with regard to this? Or some general computing
    wisdom that applies?
    Regards,

    --
    Fujii Masao
  • Fujii Masao at Feb 8, 2013 at 5:46 pm

    On Thu, Feb 7, 2013 at 2:14 AM, Phil Sorber wrote:
    On Wed, Feb 6, 2013 at 11:36 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 1:15 AM, Phil Sorber wrote:
    On Wed, Feb 6, 2013 at 11:11 AM, Fujii Masao wrote:
    On Thu, Feb 7, 2013 at 12:05 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 12:44 PM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:08 AM, Phil Sorber wrote:
    On Tue, Feb 5, 2013 at 9:06 AM, Alvaro Herrera wrote:
    Phil Sorber escribió:
    On Tue, Feb 5, 2013 at 6:41 AM, Robert Haas wrote:
    On Sat, Feb 2, 2013 at 9:55 PM, Phil Sorber wrote:
    OK, here is the patch that handles the connection string in dbname.
    I'll post the other patch under a different posting because I am sure
    it will get plenty of debate on it's own.
    I'm sorry, can you remind me what this does for us vs. the existing coding?
    It's supposed to handle the connection string passed as dbname case to
    be able to get the right output for host:port.
    Surely the idea is that you can also give it a postgres:// URI, right?
    Absolutely.
    Here is it. I like this approach more than the previous one, but I'd
    like some feedback.
    The patch looks complicated to me. I was thinking that we can address
    the problem
    just by using PQconninfoParse() and PQconndefaults() like uri-regress.c does.
    The patch should be very simple. Why do we need so complicated code?
    Did you like the previous version better?

    http://www.postgresql.org/message-id/CADAkt-hnB3OhCpkR+PCg1c_Bjrsb7J__BPV+-jrjS5opjR2oww@mail.gmail.com
    Yes because that version is simpler. But which version is better depends on
    the reason why you implemented new version. If you have some idea about
    the merit and demerit of each version, could you elaborate them?
    I didn't like the way that I had to hard code the options in the first
    one as you pointed out below. I also was looking through the code for
    something else and saw that a lot of the apps were starting with
    defaults then building from there, rather than trying to add the
    defaults at the end. I think they were still doing it wrong because
    they were using getenv() on their own rather than asking libpq for the
    defaults though. So the new version gets the defaults at the beginning
    and also makes it easy to add new params without changing function
    definitions.
    + set_connect_options(connect_options, &pgdbname, &pghost,
    &pgport, &connect_timeout, &pguser);

    This code prevents us from giving options other than the above, for example
    application_name, in the conninfo. I think that pg_isready should accept all
    the libpq options.
    I'm with you there. The new version fixes that as well.
    When more than one -d options are specified, psql always prefer the last one
    and ignore the others. OTOH, pg_isready with this patch seems to merge them.
    I'm not sure if there is specific rule about the priority order of -d
    option. But
    it seems better to follow the existing way, i.e., always prefer the
    last -d option.
    The problem I am having here is resolving the differences between
    different -d options and other command line options. For example:

    -h foo -p 4321 -d "host=bar port=1234" -d "host=baz"

    I would expect that to be 'baz:1234' but you are saying it should be 'baz:4321'?

    I look at -d as just a way to pass in multiple options (when you
    aren't strictly passing in dbname) and should be able to expand the
    above example to:

    -h foo -p 4321 -h bar -p 1234 -h baz

    If we hold off on parsing the value of -d until the end so we are sure
    we have the last one, then we might lose other parameters that were
    after the -d option. For example:

    -h foo -p 4321 -d "host=bar port=1234" -d "host=baz user=you" -U me

    Should this be 'me@baz:1234' or 'you@baz:4321' or 'me@baz:4321'?

    So we would have to track the last instance of a parameter as well as
    the order those final versions came in. Sound to me like there is no
    clear answer there, but maybe there is a project consensus that has
    already been reached with regard to this?
    No maybe. But I think that all the client commands should follow the
    same rule. Otherwise a user would get confused when specifying
    options.

    Regards,

    --
    Fujii Masao
  • Phil Sorber at Feb 8, 2013 at 6:07 pm

    On Fri, Feb 8, 2013 at 12:46 PM, Fujii Masao wrote:
    No maybe. But I think that all the client commands should follow the
    same rule. Otherwise a user would get confused when specifying
    options.
    I would consider the rest of the apps using it as a consensus. I will
    make sure it aligns in behavior.
    Regards,

    --
    Fujii Masao
  • Phil Sorber at Feb 10, 2013 at 9:03 pm

    On Fri, Feb 8, 2013 at 1:07 PM, Phil Sorber wrote:
    On Fri, Feb 8, 2013 at 12:46 PM, Fujii Masao wrote:
    No maybe. But I think that all the client commands should follow the
    same rule. Otherwise a user would get confused when specifying
    options.
    I would consider the rest of the apps using it as a consensus. I will
    make sure it aligns in behavior.
    I've done as you suggested, and made sure they align with the other
    command line utils. What I have found is that dbname is passed (almost)
    last in the param array so that it clobbers all previous values. I have
    made this patch as minimal as possible, basing it on master and not on my
    previous attempt. For the record, I still like the overall design of my
    previous attempt better, but I have not included a new version based on
    that here so as not to confuse the issue; however, I would gladly do so
    upon request.

    Updated patch attached.
    Regards,

    --
    Fujii Masao
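
    A minimal sketch of the ordering Phil describes (the variable names are
    placeholders, not the actual pg_isready code): dbname goes near the end of
    the parameter array and expand_dbname is set, so a conninfo string or
    postgres:// URI given with -d clobbers any -h/-p/-U values placed earlier,
    the same way psql builds its arrays. NULL or empty values are simply
    ignored by libpq.

    #include "libpq-fe.h"

    static PGPing
    ping_with_options(const char *pghost, const char *pgport,
                      const char *pguser, const char *connect_timeout,
                      const char *pgdbname)
    {
        const char *keywords[] = {
            "host",
            "port",
            "user",
            "connect_timeout",
            "dbname",               /* near the end, so an embedded conninfo wins */
            "fallback_application_name",
            NULL
        };
        const char *values[] = {
            pghost,
            pgport,
            pguser,
            connect_timeout,
            pgdbname,
            "pg_isready",           /* illustrative application name */
            NULL
        };

        /* expand_dbname = 1: a conninfo/URI passed in pgdbname is expanded here. */
        return PQpingParams(keywords, values, 1);
    }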
  • Fujii Masao at Jun 2, 2013 at 5:14 pm
    Sorry for the delay in responding to you.
    On Mon, Feb 11, 2013 at 6:03 AM, Phil Sorber wrote:
    On Fri, Feb 8, 2013 at 1:07 PM, Phil Sorber wrote:
    On Fri, Feb 8, 2013 at 12:46 PM, Fujii Masao wrote:
    No maybe. But I think that all the client commands should follow the
    same rule. Otherwise a user would get confused when specifying
    options.
    I would consider the rest of the apps using it as a consensus. I will
    make sure it aligns in behavior.
    I've done as you suggested, and made sure they align with other
    command line utils. What I have found is that dbname is passed
    (almost) last in the param array so that it clobbers all previous
    values. I have made this patch as minimal as possible basing it off of
    master and not off of my previous attempt. For the record I still like
    the overall design of my previous attempt better, but I have not
    included a new version based on that here so as not to confuse the
    issue, however I would gladly do so upon request.

    Updated patch attached.
    Thanks! The patch basically looks good to me.

    I updated the patch against current master and fixed some problems:

    - Handle the hostaddr in the connection string properly.
    - Remove the character 'V' from the third argument of getopt_long().
    - Handle the error cases of PQconninfoParse() and PQconndefaults().
    - etc...

    Please see the attached patch.

    Regards,

    --
    Fujii Masao
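
    On the hostaddr item above, one plausible shape for the display side,
    shown purely as an illustration and not as the code from the updated
    patch, is to prefer hostaddr over host when both are present, since
    hostaddr is the address libpq will actually contact. The display_host()
    helper is hypothetical.

    #include <string.h>
    #include "libpq-fe.h"

    /* Hypothetical helper: pick the endpoint to show in pg_isready's output. */
    static const char *
    display_host(PQconninfoOption *opts)
    {
        const char *host = NULL;
        const char *hostaddr = NULL;
        PQconninfoOption *o;

        for (o = opts; o != NULL && o->keyword != NULL; o++)
        {
            if (o->val == NULL || o->val[0] == '\0')
                continue;
            if (strcmp(o->keyword, "host") == 0)
                host = o->val;
            else if (strcmp(o->keyword, "hostaddr") == 0)
                hostaddr = o->val;
        }
        return hostaddr != NULL ? hostaddr : host;
    }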
